Case Study 2: Johns Hopkins COVID-19 Dashboard and Dashboards in Crisis

On January 22, 2020, Lauren Gardner, a professor of civil and systems engineering at Johns Hopkins University, asked her PhD student Ensheng Dong to build a dashboard tracking the growing number of COVID-19 cases. They expected it to be a small project for Gardner's research group. Within weeks it was one of the most heavily trafficked pages on the internet, serving billions of requests and cited in hundreds of news articles. By the end of 2020, the Johns Hopkins COVID-19 dashboard had fundamentally shaped how the world understood the pandemic. The story of how it was built, and how it scaled from a research project to global information infrastructure in a matter of days, is a case study in how dashboards succeed or fail under extreme pressure.


The Situation: A New Disease and an Information Gap

In early January 2020, reports began emerging from Wuhan, China, about a novel respiratory illness. Infectious disease researchers watched closely, but the public information was fragmented. The WHO published updates periodically. Chinese health authorities released case counts but not at a granular level. News organizations reported stories but not structured data.

Lauren Gardner, a professor at Johns Hopkins' Center for Systems Science and Engineering (CSSE), had been working on infectious disease modeling for years. She recognized that a map of confirmed cases, updated in real time, would be useful for researchers and policymakers. On January 22, 2020, she asked Ensheng Dong — a first-year PhD student in her lab — to build one.

Dong's instructions were minimal. Build a dashboard. Show cases on a map. Update it as new data comes in. Use whatever technology you know. Gardner expected the project to take a few days and serve a small audience of disease researchers.

Dong built the first version in about a day. It used a free account on ArcGIS Online (Esri's cloud-based mapping platform), which provided a drag-and-drop dashboard editor. He added a map with red circle markers sized by case count, a list of affected regions, and a running total at the top. The dashboard was simple: no cross-filtering, no time sliders, no fancy charts. Just the facts.
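
The original was assembled in ArcGIS Online's drag-and-drop editor, so there is no code to show. For orientation, here is a minimal Python analogue of the same three-element layout, sketched in Streamlit; the regions, counts, and coordinates are placeholder values, not the January 22 figures.

```python
# A minimal Streamlit analogue of the first dashboard's layout: a running
# total, a case map, and a region list. All data below is illustrative.
import pandas as pd
import streamlit as st

cases = pd.DataFrame({
    "region": ["Region A", "Region B", "Region C"],  # placeholder regions
    "confirmed": [440, 26, 10],                      # placeholder counts
    "lat": [30.98, 23.34, 29.18],
    "lon": [112.27, 113.42, 120.09],
})
# st.map sizes markers in meters, so scale the counts up to be visible.
cases["size_m"] = cases["confirmed"] * 1000

st.metric("Total confirmed cases", int(cases["confirmed"].sum()))  # running total
st.map(cases, latitude="lat", longitude="lon", size="size_m")      # case map
st.dataframe(cases[["region", "confirmed"]])                       # region list
```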

Dong published the dashboard on the CSSE website and shared the link with Gardner. On January 22, the first day of publication, the dashboard had about 200 visits.

The Explosion

What happened next surprised everyone. The dashboard's first week saw thousands of visits. The second week, tens of thousands. By mid-February, hundreds of thousands of visits per day. By March, when the WHO declared a pandemic, the underlying services were handling billions of requests per day (reportedly peaking around 4.5 billion), more traffic than most major news sites see.

The reasons for the explosion:

No one else had it. The WHO updated its official dashboard daily, not hourly. Chinese, European, and American health agencies all had their own dashboards, but none aggregated global data in a single view. The Johns Hopkins dashboard was the first (and for a time the only) global COVID-19 tracker.

Journalists picked it up. Reporters writing about COVID-19 needed a primary data source, and the Johns Hopkins dashboard was the most comprehensive. They embedded screenshots in articles, linked to it in coverage, and mentioned Johns Hopkins' name as the authority. Each mention drove more traffic.

It was correct. Dong and his collaborators at Johns Hopkins (including Hongru Du and a growing team of students and researchers) manually verified the data against multiple sources. Official health agency reports, news stories, social media, and direct contact with health departments were all cross-checked. When numbers disagreed, they investigated and reported the most likely correct value. This verification was exhausting but essential: a dashboard that was wrong would have been worse than no dashboard.
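
The verification itself was human judgment rather than an algorithm, but the shape of the cross-check is easy to sketch. Here is a hypothetical majority-vote reconciliation in Python; the source names and the acceptance rule are illustrative assumptions, not the team's actual procedure.

```python
# Hypothetical cross-source reconciliation: values that a majority of
# sources agree on pass through; disagreements are flagged for a human.
from collections import Counter

def reconcile(region: str, reports: dict[str, int]) -> tuple[int | None, bool]:
    """Return (value, needs_review) for one region's case count.

    `reports` maps a source name (e.g. "official", "news") to its count.
    """
    counts = Counter(reports.values())
    value, votes = counts.most_common(1)[0]
    if votes > len(reports) / 2:  # a majority of sources agree
        return value, False
    return None, True  # no consensus: route to manual investigation

value, needs_review = reconcile("Hubei", {"official": 444, "news": 444, "social": 440})
print(value, needs_review)  # 444 False -- two of three sources agree
```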

It was free. Many commercial COVID-19 data products appeared during 2020, but most had paywalls or required registration. The Johns Hopkins dashboard was free, public, and embeddable. Journalists, teachers, researchers, and the general public could all use it without any barrier.

The Technical Challenges at Scale

Billions of requests per day is extraordinary traffic. The original ArcGIS Online dashboard was not designed for this scale. The Johns Hopkins team had to scramble to keep the dashboard running.

Early crisis (late January 2020): the dashboard went down repeatedly as traffic exceeded ArcGIS Online's free-tier limits. Gardner and her team worked with Esri to get an enterprise license. The dashboard was migrated to Esri's paid infrastructure within days.

February 2020: traffic continued to grow. The team added CDN caching and optimized the dashboard for faster load times. They minimized the data payload, cached the map tiles, and reduced unnecessary visual elements. Performance improved but was still strained.
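
None of that tuning happened in Python (the dashboard lived inside Esri's stack), but the same ideas carry over to any Python-served data endpoint. A generic sketch, assuming a hypothetical FastAPI service and an illustrative payload:

```python
# Sketch: a CDN-friendly dashboard data endpoint. Compact JSON, gzip
# compression, and a shared cache header keep origin traffic flat.
import gzip
import json

from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/cases.json")
def cases() -> Response:
    payload = {"total_confirmed": 81397, "updated": "2020-03-01T12:00:00Z"}  # illustrative
    body = gzip.compress(json.dumps(payload, separators=(",", ":")).encode())
    return Response(
        content=body,
        media_type="application/json",
        headers={
            # Let the CDN serve one cached copy to everyone for 5 minutes.
            "Cache-Control": "public, max-age=300",
            "Content-Encoding": "gzip",
        },
    )
```

With a shared Cache-Control header, a CDN collapses any number of identical requests into one origin fetch per cache window, which is what keeps a dashboard standing under viral load.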

March 2020: the WHO declared a pandemic, and traffic spiked again. Several countries' health agencies started using the dashboard as an official source, embedding it in their own sites. The team worked with their IT department to set up additional server capacity, CDN edge nodes, and monitoring. At peak, the dashboard was serving requests from every country on earth.

Ongoing (April 2020 onwards): the team built more sophisticated data pipelines. Instead of manual verification of every data point, they used automated scrapers for official sources, with human review only for discrepancies. They added new metrics (deaths, recoveries, tests, vaccinations) as the pandemic evolved. They published the underlying data on GitHub (at github.com/CSSEGISandData/COVID-19) so other researchers could use it.
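
The published repository is still the easiest way to see what the pipeline produced. A minimal pandas example that reads the global confirmed-cases time series straight from GitHub, assuming the file layout remains as published:

```python
# Load the CSSE global confirmed-cases time series directly from GitHub.
import pandas as pd

URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
       "csse_covid_19_data/csse_covid_19_time_series/"
       "time_series_covid19_confirmed_global.csv")

df = pd.read_csv(URL)
# One row per region; one column per date. The last date column holds the
# latest cumulative counts, so summing it gives a global total.
latest = df.iloc[:, -1].sum()
print(f"Global confirmed cases as of {df.columns[-1]}: {latest:,}")
```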

The infrastructure challenge was solved gradually, not all at once. At each stage, the team added capacity just fast enough to handle the next wave of traffic. Multiple times, the dashboard almost went down under load — only heroic efforts by the CSSE team and Esri engineers kept it running.

What the Dashboard Showed

The Johns Hopkins dashboard's visual design was minimal and utilitarian:

A world map with red circle markers sized by cumulative case count, so the largest outbreaks dominated the view. Users could click a country or region for details.

A running total at the top: total confirmed cases, deaths, and recoveries (later supplemented with vaccinations). These numbers updated as new data arrived — often multiple times per day.

A country list ranked by case count. The top five or ten countries were always visible, so you could see which outbreak was most severe.

A time-series chart showing cases over time, with log and linear toggles. The time series let users see whether the curve was flattening or accelerating (see the sketch after this list).

A table of recent updates with country, date, and case count, sourced from the CSSE GitHub repository.
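
The log/linear toggle deserves a closer look, because it is what made "is the curve flattening?" answerable at a glance: exponential growth plots as a straight line on a log axis, so any bend away from straight is immediately visible. A sketch of such a toggle in Plotly, using synthetic data (the actual dashboard was built on ArcGIS, not Plotly):

```python
# A log/linear toggle on a case-count time series, sketched in Plotly
# with synthetic exponential-growth data.
import numpy as np
import plotly.graph_objects as go

days = np.arange(60)
cases = 100 * np.exp(0.15 * days)  # synthetic exponential growth

fig = go.Figure(go.Scatter(x=days, y=cases, name="confirmed"))
fig.update_layout(
    xaxis_title="day",
    yaxis_title="confirmed cases",
    updatemenus=[{
        "type": "buttons",
        "buttons": [
            {"label": "linear", "method": "relayout", "args": [{"yaxis.type": "linear"}]},
            {"label": "log", "method": "relayout", "args": [{"yaxis.type": "log"}]},
        ],
    }],
)
fig.show()
```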

There were no fancy features. No cross-filtering, no advanced visualizations, no interactive controls beyond clicking a country. The dashboard was deliberately simple because the traffic volume demanded it — every unnecessary feature would have slowed the page and increased the infrastructure load.

The design decisions were also about trust. A dashboard that claims to show "the truth about COVID-19" to a global audience has to be absolutely reliable. Every feature is a potential source of bugs. By keeping the feature set small, the team minimized the surface area for errors. The reliability was part of the user experience: users trusted the dashboard partly because it was boring.

The Lessons for Dashboard Builders

The Johns Hopkins story offers several lessons for dashboard practitioners.

1. Simple beats fancy under pressure. The dashboard had no cross-filtering, no advanced analytics, no ML predictions. It just showed numbers on a map. Every decision about features balanced "adds value" against "adds complexity." Under extreme load, simplicity wins.

2. Data quality matters more than visualization. The team's biggest effort was not the dashboard UI but the data verification. Getting the numbers right — pulling from multiple sources, cross-checking, reconciling discrepancies — was the hard part. The visualization was the easy part. For most dashboards, this ratio is similar: the data pipeline is where the effort goes.

3. Infrastructure has to scale with attention. A dashboard that goes viral will crush its original infrastructure. Plan for this possibility. Have a path to scale up quickly (CDN, caching, enterprise hosting) even if you don't start there. The Johns Hopkins team was able to scale because Esri was willing to help; without that, the dashboard would have failed.

4. Open data builds trust. Publishing the underlying data on GitHub was transparent — anyone could check the numbers. Researchers, journalists, and other data scientists used the CSSE repository as a primary source, which reinforced the dashboard's authority. Hiding data would have been faster but would have damaged trust.

5. Small teams can have a huge impact. The core team behind the dashboard was a handful of people: Dong, Gardner, and a few colleagues. They built something that much of the world relied on. The lesson is not that you need a big team; it's that one focused team can scale a dashboard to extraordinary impact if the timing and the subject align.

6. Authority comes from being right, not from being fancy. The Johns Hopkins dashboard became authoritative because its data was correct and its sourcing was transparent. Other COVID-19 dashboards with more features, better styling, and more interactivity did not achieve the same status. Being right was the differentiator.

Theory Connection: Dashboards as Public Infrastructure

The Johns Hopkins dashboard was not a conventional analytics product. It was a piece of public infrastructure, like a weather map, a traffic report, or a stock ticker. During a crisis it served a function that no one else served, and its impact was measured not in user engagement but in the public decisions it informed.

Public-infrastructure dashboards have different requirements from business dashboards:

  • Correctness is non-negotiable. A wrong number on an internal dashboard causes a bad decision; a wrong number on a public crisis dashboard causes panic or complacency.
  • Availability matters more than features. A dashboard that's up 99.99% with three features is better than one that's up 90% with twenty features.
  • Transparency is essential. Users need to see the data sources, the update times, and the methodology. Hidden processes erode trust.
  • The team becomes accountable to the public. Once millions of people use your dashboard, you cannot take it down for maintenance at arbitrary times. The team becomes responsible for continuous service.

These requirements are unusual for most Python dashboard projects. Most dashboards serve a small internal audience and can tolerate downtime, imperfect data, and evolving features. But the Johns Hopkins example reminds us that a dashboard's purpose shapes its constraints. A tool for 20 internal users has different requirements from a tool for 20 million external users, and the design decisions should reflect those requirements.

For practitioners, the takeaway is: match your dashboard's rigor to its intended use. A quick Streamlit prototype for internal testing does not need the reliability of a COVID-19 public tracker. A production dashboard for customer-facing analytics probably does need most of those qualities, even if the audience is smaller. Think about what happens if the dashboard is wrong, if it goes down, if it is misinterpreted, and design for those scenarios.


Discussion Questions

  1. On scaling under pressure. The Johns Hopkins dashboard went from a few hundred visits on its first day to billions of requests per day within weeks. What technical and organizational preparations would you make to handle similar growth?

  2. On simple vs. fancy. The dashboard had few features. Did this limit its usefulness, or was simplicity the right choice? Under what circumstances would more features have been better?

  3. On data quality. The team spent most of their effort on verification, not visualization. Is this the right ratio for most dashboards, or is it specific to high-stakes public dashboards?

  4. On open data. Publishing the data on GitHub was a trust-building decision. Should all public dashboards do this, or are there legitimate reasons to keep data private?

  5. On the dashboard's legacy. COVID-19 case counts are no longer front-page news, and Johns Hopkins stopped collecting data in March 2023. What do you do with a dashboard that has outlived its original purpose?

  6. On your own dashboards. If your next dashboard project went unexpectedly viral, what would break first? How would you prepare?


The Johns Hopkins COVID-19 dashboard is an unusual case: a small research project that became global information infrastructure during a crisis. Its technical choices (simple, reliable, transparent) serve as a benchmark for dashboards that need to be trusted at scale. When you build your own dashboard, even one for a small internal audience, consider what would happen if it suddenly had a thousand times more users than you expected. The Johns Hopkins team could not have anticipated their reality, and they scaled up under pressure. Hopefully your dashboards face less dramatic tests, but the lessons of simplicity, correctness, and transparency apply regardless of scale.