Case Study 2: Color and COVID — How Dashboard Color Choices Shaped Public Understanding of the Pandemic
When Color Became a Public Health Intervention
In January 2020, a novel coronavirus began spreading globally. Within weeks, data dashboards became the primary interface between the public and the pandemic. For millions of people, the daily ritual of checking COVID-19 dashboards — case counts, death tolls, hospital capacity, vaccination progress — was the most sustained engagement with data visualization of their lives.
The color choices on these dashboards were not neutral design decisions. They shaped how people perceived risk, urgency, and progress. A dashboard that used aggressive reds made the same data feel more alarming than one that used muted purples. A traffic-light color scheme with arbitrary thresholds created sharp perceptual boundaries that did not exist in the underlying data. A vaccination map that used green for "above 60% vaccinated" and red for "below" communicated a binary reality — safe or unsafe — when the underlying relationship between vaccination rate and community protection was continuous and uncertain.
This case study examines how different organizations made color choices for their COVID dashboards, what those choices communicated (intentionally and unintentionally), and what principles from this chapter were followed, violated, or tested in unprecedented conditions.
The Johns Hopkins Dashboard: Red on Dark
The Johns Hopkins University Center for Systems Science and Engineering launched its COVID-19 dashboard on January 22, 2020 — one of the earliest and most widely used pandemic trackers. At its peak, the dashboard received over one billion views.
Color choices: The dashboard used a dark background (near-black) with data encoded in circles sized by case count and colored in shades of red. The effect was dramatic: glowing red dots on a dark map, growing larger and more intense as case counts rose. The dashboard also used white text and UI elements against the dark background for contrast.
What this communicated: The combination of red (semantic association: danger, urgency, alarm) and a dark background (semantic association: gravity, crisis) created an emotional impression of severity. This was arguably appropriate — COVID-19 was a severe public health crisis. The red sequential palette on dark background ensured high visual contrast and readability.
What this got right from a color science perspective:
- The sequential red palette used luminance to encode magnitude: more cases meant darker, more saturated red. The ordering was clear.
- The dark background provided high contrast for the red data elements, ensuring readability across devices.
- The single-hue approach (shades of red only) avoided categorical color confusion — the map was about one variable (case count), and it used one color dimension.
What raised questions:
- The red-on-dark aesthetic was viscerally alarming. Some critics argued that the dashboard "looked like a war map" and contributed to panic rather than informed decision-making. This is not a color-science critique — the palette was technically sound — but a semantic color critique: the choice to use red (danger) on black (gravity) was an editorial decision that amplified urgency.
- The circle sizing introduced a confounding encoding: viewers had to decode both circle area and circle color, which — as we learned in Chapter 2 — involves two channels with different perceptual accuracy. The color encoding was clear; the area encoding was less so, since area perception follows a power law with exponent ~0.7.
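The area-perception point above can be made concrete with a quick calculation. This is a minimal sketch using the ~0.7 exponent cited in the text; empirical estimates of the exponent vary by study and stimulus.

```python
# Stevens' power law for perceived area, using the ~0.7 exponent cited
# above (illustrative; real exponents vary by study and stimulus).

def perceived_magnitude(true_area, exponent=0.7):
    """Perceived size of a filled circle as a function of its true area."""
    return true_area ** exponent

# A location with 4x the case count gets a circle with 4x the area,
# but viewers perceive it as only about 2.6x larger:
ratio = perceived_magnitude(4.0) / perceived_magnitude(1.0)
print(f"perceived ratio for a 4x area increase: {ratio:.2f}")
```

This is why doubling a circle's area reads as much less than "twice as bad" — and why the color channel, not the size channel, carried most of the usable signal on the Hopkins map.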
The New York Times: Discipline and Restraint
The New York Times developed a suite of COVID visualizations that evolved over the course of the pandemic. Their approach reflected a design philosophy of restraint and precision.
Color choices for case maps: The Times used a sequential red palette for county-level case rate maps — pale pink for low rates, through medium reds, to dark burgundy for high rates. The palette was perceptually well-ordered: luminance decreased monotonically from low to high case rates. The background was white or light gray, providing clean contrast.
Color choices for vaccination maps: As the pandemic shifted from crisis to vaccination campaign, the Times used a separate sequential palette — typically a blue-green or teal palette — for vaccination rate maps. This was a deliberate semantic choice: red for the disease (danger, threat), blue-green for the vaccine (calm, progress, hope). The shift in hue signaled a shift in narrative from alarm to recovery.
What this got right:
- Consistent semantic association: red consistently meant "cases" or "danger" across all pandemic-related graphics. Blue-green consistently meant "vaccination" or "progress." A regular reader built up a color vocabulary over months.
- Sufficient granularity: the Times typically used 6 to 8 color steps, providing enough resolution to see county-level differences without overwhelming the viewer.
- The luminance gradient was strong in all palettes, ensuring grayscale readability (important for the print edition) and colorblind accessibility.
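The grayscale-readability property in the last bullet can be checked mechanically: compute each swatch's relative luminance and verify that it changes monotonically across the ramp. The sketch below uses the WCAG 2.x luminance formula; the hex values are a generic light-to-dark red ramp chosen for illustration, not the Times' actual palette.

```python
# Check that a sequential palette's luminance falls monotonically from the
# low end to the high end, so the ramp survives grayscale printing and
# most forms of color vision deficiency.

def relative_luminance(hex_color):
    """WCAG 2.x relative luminance of an sRGB color like '#fee5d9'."""
    def linearize(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

# Illustrative light-pink-to-dark-red ramp (not any outlet's exact colors):
ramp = ["#fee5d9", "#fcae91", "#fb6a4a", "#de2d26", "#a50f15"]
lums = [relative_luminance(c) for c in ramp]
assert all(a > b for a, b in zip(lums, lums[1:])), "luminance is not monotonic"
```

A ramp that passes this check can be printed in black and white, or viewed by a colorblind reader, without losing its ordering.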
What raised questions:
- The choice of breakpoints — where one color step ended and the next began — encoded editorial judgment about what constituted "low," "medium," and "high" case rates. These breakpoints shifted over the course of the pandemic as baseline case rates changed. A county that was dark red in April 2020 might have been pale pink by January 2022 standards, even at the same absolute case rate. The color thus communicated relative severity within a shifting frame, not absolute risk.
- During the Delta and Omicron waves, case rates were so high that many counties were in the top one or two color categories, creating a map that was nearly uniformly dark. The sequential palette "topped out," and within-category variation was hidden. This is a technical limitation of discrete sequential palettes: when the data range exceeds what the palette was designed for, the top end loses resolution.
The Traffic-Light Problem: Government Risk Levels
Many national and local governments adopted traffic-light color schemes (green, yellow, red — sometimes with additional levels like orange and purple) to communicate community risk levels. The United States CDC, the United Kingdom government, and numerous state and county health departments used variations of this approach.
What seemed intuitive: The traffic-light metaphor is deeply embedded in daily life. Green means go, safe, proceed. Yellow means caution. Red means stop, danger. Using this for community risk levels seemed natural and immediately understandable.
What went wrong from a color science perspective:
Problem 1: Red-green colorblind inaccessibility. The most critical visual distinction — between "safe" (green) and "dangerous" (red) — was the one that approximately 8% of male viewers could not make. A viewer with deuteranopia looking at a county-level risk map saw green and red as similar olive-brown tones, rendering the most important information invisible. This is exactly the failure mode described in Section 3.5 of this chapter.
Some jurisdictions addressed this by adding pattern fills, text labels, or icons to the color categories. Others did not, leaving colorblind viewers to rely on geographic knowledge or text-based alternative formats.
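A minimal automated screen for this failure mode, assuming the WCAG relative-luminance and contrast-ratio formulas: if two category colors differ mainly in hue and their luminance contrast is weak, a viewer who cannot distinguish the hues has little else to go on. The specific red and green hex values below are illustrative, not any agency's palette.

```python
# Screen a "safe" green and "danger" red for non-hue contrast. A weak
# luminance contrast means hue is the only distinguishing channel, which
# fails for red-green colorblind viewers and in grayscale.

def relative_luminance(hex_color):
    """WCAG 2.x relative luminance of an sRGB color like '#d62728'."""
    def linearize(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio between two colors (ranges from 1:1 to 21:1)."""
    la, lb = relative_luminance(color_a), relative_luminance(color_b)
    hi, lo = max(la, lb), min(la, lb)
    return (hi + 0.05) / (lo + 0.05)

# Illustrative traffic-light red and green (hypothetical hex values):
ratio = contrast_ratio("#d62728", "#2ca02c")
# Well below the 3:1 commonly recommended for graphical elements, so the
# pair needs redundant encoding (patterns, labels, icons) to be accessible:
print(f"red/green contrast ratio: {ratio:.2f}:1")
```

A check like this takes seconds to run and would have flagged most pandemic-era traffic-light maps before publication.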
Problem 2: Arbitrary thresholds creating false certainty. A traffic-light scheme with three (or four, or five) color categories imposes sharp boundaries on continuous data. A county with 99 cases per 100K might be yellow ("moderate risk") while a neighboring county with 101 cases per 100K is red ("high risk"). The two-case difference produces a dramatic visual contrast — a full color-category shift — that implies a meaningful distinction where none exists.
This is the discrete palette problem discussed in Section 3.3: when you convert continuous data to a small number of color categories, the boundaries between categories become the most visually prominent features of the map. Viewers perceive the boundaries as real distinctions in the data, when in fact they are artifacts of the classification scheme. The actual risk difference between 99 and 101 cases per 100K is trivial. The visual difference is the entire distance between yellow and red.
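The 99-versus-101 example can be stated in a few lines of code. The thresholds below are illustrative stand-ins, not any agency's actual cutoffs.

```python
import bisect

# Hypothetical traffic-light thresholds in cases per 100K (illustrative):
THRESHOLDS = [10, 100]
LABELS = ["green", "yellow", "red"]

def risk_color(rate_per_100k):
    """Map a continuous case rate onto a discrete traffic-light category."""
    return LABELS[bisect.bisect_right(THRESHOLDS, rate_per_100k)]

# A trivial 2-case difference straddling a threshold produces a full
# category jump, while an 88-case difference inside a category does not:
print(risk_color(99), risk_color(101))   # yellow red
print(risk_color(11), risk_color(99))    # yellow yellow
```

The classification function is perfectly deterministic, but the visual story it tells — a hard boundary at 100 — is an artifact of the cutoffs, not of the epidemiology.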
Problem 3: Green as "safe" when no level was safe. During high-transmission periods, some jurisdictions had no counties in the "green" category, but the color scheme still implied that green was a possibility. Conversely, during low-transmission periods, some jurisdictions had all counties in green, which communicated "the pandemic is over" when public health officials were trying to maintain vigilance. The semantic weight of "green = safe" carried implications beyond what the data supported.
Problem 4: Inconsistent thresholds across jurisdictions. Different health departments set different thresholds for the same color categories. "Red" might mean >100 cases/100K in one state and >200 cases/100K in another. A viewer comparing across jurisdictions saw the same color representing different data — a violation of the consistency principle described in Section 3.6.5.
Vaccination Progress: The Problem of Optimistic Green
As vaccination campaigns accelerated in 2021, many dashboards adopted green-themed palettes for vaccination rate maps. Darker green meant higher vaccination rates. This leveraged the semantic association of green with "good," "healthy," and "progress."
The problem with green-for-vaccination: The use of green for vaccination rates created an implicit value judgment — green means good, so more vaccination is good. This is a scientifically defensible position, but encoding it into the color scheme meant that the map was not merely showing data; it was advocating a position. A viewer skeptical of vaccination saw a map that appeared to be editorial rather than informational.
More practically, using green as the high end of a sequential palette and leaving low-vaccination areas in yellow or pale tones visually minimized the areas with low vaccination rates. Because green (saturated, dark) is more visually prominent than pale yellow (desaturated, light), the vaccinated areas dominated the visual impression. A viewer glancing at the map might perceive "mostly green" and conclude that vaccination was progressing well, even if significant under-vaccinated pockets existed.
An alternative approach: Some organizations used a diverging palette centered on a target threshold (for example, the estimated herd-immunity threshold). Areas below the threshold were one hue; areas above were another. This encoding communicated not just the level but the relationship to a goal, making the "good enough vs. not yet" distinction explicit without relying on the semantic weight of green.
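A sketch of that target-centered diverging encoding, with an assumed 60% target and illustrative endpoint colors. For brevity it interpolates in raw sRGB; a production version would interpolate in a perceptual space such as CIELAB.

```python
# Diverging encoding centered on a target: near-white at the target, one
# hue deepening below it, a different hue deepening above it.

BELOW_END = (127, 59, 8)     # dark orange-brown for far below target
MIDPOINT  = (247, 247, 247)  # near-white at the target itself
ABOVE_END = (45, 0, 75)      # dark purple for far above target

def _lerp(c_from, c_to, t):
    """Linear interpolation between two RGB triples, t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c_from, c_to))

def vaccination_color(rate, target=0.60):
    """Map a vaccination rate in [0, 1] to RGB, relative to the target."""
    if rate <= target:
        t = (target - rate) / target           # 0 at the target, 1 at 0%
        return _lerp(MIDPOINT, BELOW_END, t)
    t = (rate - target) / (1.0 - target)       # 0 at the target, 1 at 100%
    return _lerp(MIDPOINT, ABOVE_END, t)
```

The design choice here is that "at the goal" is visually quiet (near-white) and distance from the goal is loud in either direction, without borrowing green's "all clear" connotation.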
Ethical Dimensions
The COVID dashboard color story raises fundamental questions about the ethics of color choices in high-stakes visualization.
Color as editorial voice. Every color choice on a pandemic dashboard was an editorial decision. Red for cases amplified urgency. Green for vaccinations communicated approval. Muted, neutral palettes communicated detachment. There is no "neutral" color choice — every palette carries associations that influence the viewer's emotional response. This does not mean that emotional influence is always wrong (alarming colors during a genuine crisis may be appropriate), but it means that designers must be aware of and accountable for the emotional weight of their color choices.
Thresholds as policy communication. When a government assigns colors to risk categories, the color boundaries become policy communications. "You are in a red zone" is not just a data statement — it triggers behavioral responses (stay home, cancel events, close schools). The color choice is not merely a visualization decision; it is a public health intervention. The precision and honesty of the color-to-data mapping therefore carries consequences beyond aesthetics.
Accessibility as equity. During the pandemic, colorblind viewers who could not distinguish red from green on risk maps were at a real informational disadvantage. If your neighbor could look at the map and immediately assess their county's risk level, but you could not because of an entirely fixable design flaw, that is an equity issue. The tools to test for colorblind accessibility existed and were free. Failing to use them during a public health crisis was not a minor oversight.
Normalization through palette choice. When case rates rose so high that the palette "topped out" — when most of the map was in the darkest color category — the loss of resolution communicated a kind of normalization. If everything is dark red, nothing looks exceptional. The palette's inability to show variation at the high end effectively communicated "all high" when the truth was "high, higher, and highest." Some organizations responded by rescaling their palettes, which introduced a different problem: the same color meant different things at different times, making temporal comparison unreliable.
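The rescaling trade-off can be demonstrated with quantile-based breakpoints, where the palette always spreads across the current data. This is a sketch; all rates below are invented for illustration.

```python
import bisect

def quantile_breaks(rates, n_bins=4):
    """Breakpoints that split the current data into roughly equal bins
    (a 'rescaled' palette that adapts to whatever range exists now)."""
    s = sorted(rates)
    return [s[len(s) * k // n_bins] for k in range(1, n_bins)]

def color_bin(rate, breaks):
    """0 = palest color, len(breaks) = darkest color."""
    return bisect.bisect_right(breaks, rate)

spring_2020 = [5, 10, 20, 40, 60, 80, 120, 200]              # invented rates
winter_2022 = [200, 400, 800, 1200, 1600, 2000, 2400, 3000]  # invented rates

rate = 200  # the same absolute case rate at two points in time
early = color_bin(rate, quantile_breaks(spring_2020))   # darkest bin
late = color_bin(rate, quantile_breaks(winter_2022))    # palest bin
print(early, late)  # 3 0
```

Under rescaling, the identical rate moves from the darkest bin to the palest one, which is exactly the temporal-comparison failure described above; a fixed scale avoids that failure but tops out instead.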
Principles Demonstrated
This case study illustrates several principles from Chapter 3 in a real-world, high-stakes context:
| Principle | How It Appeared in COVID Dashboards |
|---|---|
| Palette type must match data type | Sequential palettes for case counts (one direction); diverging palettes for change-from-baseline metrics |
| Luminance carries the magnitude signal | The best dashboards ensured that luminance alone conveyed case severity, surviving colorblind viewing and grayscale |
| Red-green is the most dangerous pair | Traffic-light schemes failed for ~8% of male viewers at a time when accurate risk perception was critical |
| Discrete palettes create false boundaries | Arbitrary thresholds made 99 vs. 101 cases/100K look like a categorical distinction |
| Semantic color is editorial | Red for cases, green for vaccines — every palette choice carried emotional and political weight |
| Consistency matters | Different jurisdictions using different thresholds for the same colors made cross-boundary comparison unreliable |
| Color choices have ethical consequences | In a pandemic, inaccessible or misleading color encoding was not an aesthetic failure — it was a public health failure |
What Would You Have Done?
Imagine you are the lead designer for a new national COVID-19 dashboard in March 2020. You know the principles from this chapter. You also know that:

- Your audience includes the general public, policymakers, journalists, and healthcare workers.
- Approximately 8% of male viewers have color vision deficiency.
- The dashboard will be viewed on phones, desktops, and projected in briefing rooms.
- The data will be reported by media outlets that will screenshot your maps and add their own commentary.
- Emotional response to your color choices will influence public behavior.
Design your color scheme. Specify: palette type, specific colors, number of steps, colorblind safeguards, and redundant encoding strategies. Justify each choice using the principles from this chapter.
Discussion Questions
- The Semantic Color Dilemma: The Johns Hopkins dashboard used red to signal severity. Some argued this amplified fear. Others argued it was honest — the crisis was severe. Where do you draw the line between "honestly representing gravity" and "amplifying alarm through design"? Can this line even be drawn objectively?
- Traffic Lights and False Simplicity: The traffic-light metaphor (red/yellow/green) is intuitive but imposes discrete categories on continuous data. Is there ever a situation where a traffic-light color scheme is the right choice for public health communication? If so, what safeguards would you add?
- Temporal Rescaling: Some dashboards rescaled their color palettes as case rates rose, so that what was "dark red" in March 2020 became "pale pink" by January 2022. Others kept fixed scales. What are the trade-offs of each approach? Which serves the public better?
- Accessibility Under Pressure: During a fast-moving crisis, design teams are under pressure to ship quickly. Testing for colorblind accessibility takes time. How would you argue for accessibility testing when your manager says "we need this live by tomorrow"?
- Color and Trust: Some viewers distrusted COVID dashboards, perceiving them as tools of persuasion rather than information. To what extent did color choices (red for alarm, green for compliance) contribute to this distrust? How would you design a dashboard that maximizes perceived objectivity?
- The Resolution Question: A 3-step color scheme (green/yellow/red) is simple but hides variation. A 9-step scheme reveals nuance but may confuse non-expert viewers. How do you decide the right number of color steps for a general-public audience? Does the answer change in a crisis?
Return to the chapter text or proceed to the exercises.