Case Study 2: The Challenger Disaster and the Chart That Did Not Save the Launch
The engineers who warned against launching the Space Shuttle Challenger had the right data. They did not have the right chart. Seven astronauts died. This is what visualization ethics looks like on its highest-stakes day.
The Situation
On the evening of January 27, 1986, a telephone conference began between engineers at Morton Thiokol in Utah and managers at NASA's Marshall Space Flight Center in Alabama. The topic was the weather. Kennedy Space Center in Florida had recorded overnight lows near 18 degrees Fahrenheit, and the forecast for the scheduled launch time the next morning was around 29 degrees Fahrenheit — the coldest launch temperature in the Space Shuttle program's history by a wide margin.
The engineers had a problem. The Solid Rocket Boosters (SRBs) attached to the shuttle were manufactured by Morton Thiokol and sealed with rubber O-rings designed to prevent hot combustion gases from escaping through the joints between segments. In warm weather, the O-rings had performed reliably. In cold weather, engineers had observed signs of erosion and partial failure on previous flights. The rubber became stiff and less resilient at low temperatures, taking longer to seal the gap as the booster pressurized. If the O-rings failed to seal in time, the escaping gas could burn through surrounding structures and — in a worst case — cause a catastrophic failure of the booster itself.
Roger Boisjoly, a Thiokol engineer who had studied the O-ring problem in detail, was adamant: the launch should be postponed. So were several of his colleagues. They had seen enough evidence on previous flights to believe that launching at 29 degrees was outside the temperature range where the O-rings could be trusted to seal. They had hours to make their case to NASA management, and they had all the data they needed to support it.
They lost the argument. The launch proceeded at 11:38 AM on January 28, 1986. Seventy-three seconds after liftoff, the Space Shuttle Challenger broke apart over the Atlantic Ocean. Commander Francis "Dick" Scobee, Pilot Michael Smith, Mission Specialists Judith Resnik, Ronald McNair, and Ellison Onizuka, Payload Specialist Gregory Jarvis, and teacher-in-space Christa McAuliffe were killed.
The Rogers Commission, convened to investigate the accident, concluded that the immediate cause was exactly what the Thiokol engineers had feared: a failure of the O-rings on the right solid rocket booster, attributable to the cold launch temperature. The engineers had been correct about the technical facts. They had the data. They had the expertise. They had the warning. And their warning did not get through.
Edward Tufte examined the charts the engineers had prepared for that overnight teleconference — the charts meant to convey the danger — and concluded that the visualization itself was a substantial part of the failure. His analysis, first published in Visual Explanations in 1997 and expanded in later editions, argued that the chart Thiokol presented did not make the case the data actually supported. The engineers' information was essentially correct, but their visual presentation failed to communicate it to the decision-makers who needed to see the pattern instantly.
This case is fundamentally different from Case Study 1. There was no intent to mislead. There were no partisan stakes. There was no cable news sensationalism. There were engineers with real expertise trying to prevent a disaster, and their charts failed them at the worst possible moment. The lesson is darker and more immediately applicable to every practicing analyst: honesty is not the same as clarity, and a chart that fails to communicate a true and urgent message is a form of visualization failure that carries real consequences.
The Data
The relevant data was the complete record of O-ring performance on prior Space Shuttle flights. For each flight, engineers knew:
- The ambient temperature at launch (degrees Fahrenheit)
- The number of O-ring incidents observed on that flight (erosion or blow-by events, on either the primary or secondary O-rings, across all the field joints on both SRBs)
- The severity of each incident
Across the prior 24 shuttle flights, there had been O-ring incidents on seven flights and no O-ring incidents on the other seventeen. The incidents were not uniformly distributed by temperature. The two flights launched at the coldest temperatures in the history of the program to that point — STS-51C at 53 degrees Fahrenheit and STS-41B at 57 degrees Fahrenheit — had both shown significant O-ring damage. The flight at the warmest temperature in the data set had shown no damage at all. And most importantly, the forecasted launch temperature for Challenger — 29 degrees Fahrenheit — was 24 degrees colder than the coldest previous launch. The shuttle had never launched in conditions anywhere close to what the engineers were being asked to approve.
The pattern the engineers believed was real: O-ring performance degraded with decreasing temperature, the effect was visible in the data, and a launch at 29 degrees would require extrapolating well outside any prior experience.
The data supported that claim. The question was whether a chart could be designed that made the pattern visible to people who had to make a decision in a few hours, under pressure, from a teleconference line.
The Visualization
The charts that Thiokol faxed to NASA on the evening of January 27 have been reproduced in multiple sources — most extensively by Tufte in Visual Explanations — and they are remarkable for what they did not show.
The primary chart Thiokol presented listed the flights with O-ring problems. It showed the flight number, the temperature, and a rocket icon with notations about which O-ring had experienced which kind of damage. The flights were ordered chronologically, by flight number — not by temperature. The chart included only the flights that had shown problems. The seventeen flights that had flown without O-ring incidents were not on the chart.
The effect of this design was to destroy the pattern the engineers were trying to show. Consider what a manager on the NASA end of the teleconference would have seen:
- A chart showing seven flights with O-ring damage
- Temperatures for those seven flights ranging from 53 to 75 degrees Fahrenheit
- The flights listed in chronological order, making it impossible to see any temperature relationship
- No data at all for the seventeen flights without damage
Read this list and try to find the pattern "O-ring damage is correlated with low temperature." You cannot — because the chart has been structured in a way that makes the pattern invisible. Damage has occurred at 75 degrees (warm) and at 53 degrees (cold). Without seeing the seventeen no-damage flights for context, the 75-degree data point suggests that damage can happen at any temperature. The temperature-dependence pattern — the central argument the engineers were trying to make — is not on the chart at all. It exists only in the engineers' heads, as a conclusion drawn from a full dataset they did not put on the page.
The chart that would have made the pattern visible is almost embarrassingly simple. Put all twenty-four flights on a scatter plot. X-axis: launch temperature. Y-axis: number of O-ring incidents. One dot per flight. No omissions, no chronological ordering, no rocket icons. Add a dashed vertical line at 29 degrees Fahrenheit — the forecasted launch temperature — to show the manager exactly how far outside the existing data range the proposed launch would be.
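A minimal sketch of such a chart is a few lines in any plotting library. The version below uses matplotlib with illustrative stand-in data (the temperature and incident values here are hypothetical placeholders, not the actual flight record); only the design follows the description above — every flight plotted, temperature on the x-axis, and a dashed line marking the 29-degree forecast:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suitable for scripted chart generation
import matplotlib.pyplot as plt

# Hypothetical stand-in data: one (temperature, incident count) pair per flight,
# including the flights with zero incidents. The real analysis would use the
# complete launch record.
temps =     [53, 57, 58, 63, 66, 68, 70, 70, 72, 73, 75, 75, 76, 78, 79, 81]
incidents = [ 3,  1,  1,  1,  0,  0,  1,  0,  0,  0,  2,  0,  0,  0,  0,  0]

fig, ax = plt.subplots()
ax.scatter(temps, incidents)                 # one dot per flight, no omissions
ax.axvline(29, linestyle="--", color="red")  # the forecast launch temperature
ax.annotate("proposed launch: 29 F", xy=(29, 2.0), xytext=(31, 2.4))
ax.set_xlabel("Launch temperature (degrees F)")
ax.set_ylabel("O-ring incidents")
ax.set_xlim(25, 85)                          # extend the axis so the 29 F gap is visible
fig.savefig("oring_scatter.png")
```

The `set_xlim` call matters: if the axis were autoscaled to the data, the 29-degree line would fall off the left edge and the chart would hide exactly the extrapolation gap it is supposed to show.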
Tufte has reproduced this "corrected" chart multiple times. The pattern jumps out immediately. Low-temperature flights cluster toward higher incident counts. The single 75-degree flight with damage is visible as an outlier, not a refutation of the trend. The 29-degree launch forecast is shown in empty space, with no precedent on either side to support it. The manager, looking at this chart for three seconds, would have understood exactly what the engineers were trying to say.
The engineers did not draw this chart. They were under time pressure, working from a standard reporting format, using the charts they had prepared for earlier internal reviews. The chart that existed was the chart that was faxed. And the chart that was faxed did not make the case.
The Impact
The launch happened. The O-rings failed exactly as Boisjoly and his colleagues had feared. Seventy-three seconds after liftoff, the shuttle disintegrated. The seven crew members were killed.
The Rogers Commission report, issued in June 1986, identified the O-ring failure as the physical cause and the flawed decision process as the organizational cause. Richard Feynman's famous demonstration during the hearings — dunking a piece of O-ring rubber into a glass of ice water and showing that it lost its resilience — made the engineering point in a form that anyone could understand. Feynman was, in effect, doing with a physical prop what the Thiokol chart had failed to do with visualization: making the temperature-dependence pattern immediately visible.
In the years that followed, the Challenger accident became a standard case study in engineering ethics, organizational decision-making, and risk communication. Diane Vaughan's book The Challenger Launch Decision (1996) examined the "normalization of deviance" that had let NASA gradually accept O-ring damage as routine rather than alarming. Engineering schools incorporated the case into required ethics courses. NASA restructured its decision-making processes to give engineers more direct authority to veto launches on safety grounds.
Tufte's visualization-focused analysis added another dimension to the conversation. In Visual Explanations and in his public lectures, Tufte argued that the Thiokol engineers had the right conclusion but the wrong visualization — and that the wrong visualization had contributed to the wrong decision. The chart was not dishonest in the Fox News sense. Nobody at Thiokol was trying to hide the temperature relationship. But by omitting the successful flights, ordering by flight number, and relying on rocket icons rather than a clean scatter plot, the chart made a visible pattern invisible. The analytic content was right; the visual communication was wrong.
This is a specific kind of visualization failure that the conventional "don't lie with charts" framing does not quite capture. The Thiokol chart was technically honest. Every fact on it was correct. But it was ineffective — and in a context where a chart had to persuade decision-makers under time pressure, ineffective was indistinguishable from dishonest. The managers who approved the launch did not see the pattern. They might have seen it if the chart had been designed differently. The chart's failure to show what the data supported was, in its own way, a form of lying: the chart told a story the data did not tell, by omitting the evidence that would have told the other story.
Tufte's reframing has had lasting influence on how engineering teams think about risk communication. A clean scatter plot showing all data points is now standard in safety reviews. Charts that omit successful observations — "only show the failures" — are recognized as the kind of selective presentation that can produce exactly the outcome Challenger produced. The phrase "make the pattern visible" entered the vocabulary of engineering chart review.
Why It Failed: A Visualization Analysis
The Thiokol chart failed for reasons that connect directly to the principles this chapter has been building, but the failure was of a particular kind that deserves careful examination.
1. The chart committed passive cherry-picking. The engineers did not deliberately choose to hide the seventeen successful flights. They were following a reporting convention that documented "incidents of concern" rather than a comprehensive performance record. But the effect was indistinguishable from deliberate cherry-picking. By showing only the flights with O-ring damage, the chart stripped out the contrast that would have made the temperature relationship visible. The selection was editorial, even if it was not intentional.
2. The chart encoded the wrong variable as its organizing principle. The critical variable was temperature. The chart was organized by flight number. This single structural choice made the temperature pattern impossible to see. The lesson is not that chronological ordering is always wrong — for many questions, it is the right choice — but that the organizing principle of a chart should be the variable you want the viewer to reason about. If you want the viewer to see a temperature relationship, the chart must be organized by temperature.
3. The chart relied on icons rather than encodings. Rocket-shaped icons decorated the Thiokol chart, but they did not encode quantitative information in a form the visual system could process pre-attentively. A small rocket icon with notes about "primary O-ring erosion" and "secondary O-ring blow-by" requires the reader to parse the text and aggregate the meaning in working memory. A dot on a scatter plot can be read at a glance. The reliance on iconography sacrificed encoding accuracy for the illusion of technical seriousness.
4. The chart did not show the decision at stake. The proposed launch temperature — 29 degrees Fahrenheit — was not represented anywhere on the Thiokol chart. The viewer had no visual anchor for "this is what we are about to do, and this is how far it sits from any prior experience." A single dashed vertical line at 29 degrees would have made the extrapolation problem impossible to miss. Its absence meant that the managers had to do the extrapolation mentally, in real time, under pressure, during a teleconference.
5. Time pressure compounded all of the above. The engineers were making last-minute arguments against a scheduled launch. They were using the charts they had already prepared, and nobody stopped to redraw the temperature-vs-incidents scatter plot from scratch and fax a revised version. The chart that existed was the chart they used. This is a recurring pattern in visualization failures: the chart-making decision often happens long before the communication moment, and the chart-in-hand is the chart you get to use.
The Difference Between Dishonest and Ineffective
Case Study 1 (the Fox News chart) and Case Study 2 (the Thiokol chart) are both visualization failures, but they are not the same kind of failure. Understanding the difference is essential.
The Fox News chart was dishonest: the visual encoding told a story the data did not support. A casual viewer was led to a false impression — that unemployment had dropped dramatically — by a truncated y-axis. The distortion was active. The chart made claims about the data that were not in the data.
The Thiokol chart was ineffective: the visual encoding failed to tell a story the data did support. A decision-maker was left without visual access to a pattern — the temperature dependence of O-ring damage — that was present in the data but absent from the chart. The distortion was passive. The chart failed to make claims that should have been made.
These two failure modes are both ethical failures, but they point toward different ethical obligations. The anti-distortion ethic says: do not mislead viewers with false visual impressions. The clarity ethic says: when the data supports a conclusion that viewers need to reach, the chart must make that conclusion visible. A chart maker who meets the first obligation but fails the second has not fulfilled the ethical duty of visualization. The Thiokol chart was not a lie. It was a silence at a moment when speech was owed.
The clarity ethic is harder to enforce and easier to rationalize away. "We showed the data, it is not our fault if they did not see the pattern" is a defense. It is not a complete defense. If you have data that supports an urgent conclusion and you have control over how that data is presented, the presentation is your responsibility. Honest charts are necessary. They are not sufficient.
Lessons for Modern Practice
Every chart maker will work at some point on data that matters — where the viewer's decision will affect outcomes beyond the chart itself. The Challenger case is extreme, but the pattern is general.
Show all the data when the pattern is the point. If you are trying to convince a viewer that a relationship exists, omitting the cases that do not support the relationship undermines the argument — even if the omission is passive and the motive is innocent. The full dataset contains the contrast that makes the pattern visible. Show it.
Organize the chart by the variable you want the viewer to reason about. If the question is about temperature dependence, the x-axis is temperature. If the question is about time trends, the x-axis is time. If the question is about category comparison, the organizing variable is the category. The chart's structure should embody the question you are asking.
Annotate the decision point. If a chart is being used to inform a decision — launch or delay, approve or reject, proceed or stop — mark the decision scenario on the chart. A vertical line for "the temperature we are proposing," a horizontal line for "the threshold we care about," a highlighted region for "the conditions under discussion." Do not make the viewer mentally overlay the decision onto the data.
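Each of these annotations is a one-liner in most plotting libraries. A sketch in matplotlib (the data and threshold values below are arbitrary placeholders, chosen only to show the three annotation patterns):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([3, 5, 8, 12], [40, 55, 62, 80])  # placeholder observations

# Mark the decision scenario directly on the chart so the viewer
# does not have to overlay it mentally:
ax.axvline(10, linestyle="--", label="proposed operating point")    # what we are about to do
ax.axhline(70, linestyle=":", label="safety threshold")             # the limit we care about
ax.axvspan(9, 13, alpha=0.15, label="conditions under discussion")  # highlighted region
ax.legend()
fig.savefig("decision_annotations.png")
```

The point is not the specific API but the habit: every chart used to inform a decision should carry at least one visual mark that represents the decision itself.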
Do not substitute decoration for encoding. Rocket icons, logos, photographs, 3D effects, decorative typography — these can make a chart look professional and serious, but they do not encode information. When decision-makers must read a chart under time pressure, they need visual encodings, not visual decoration. Every icon that is not encoding data is an icon that is stealing attention from the encodings that are.
Make the patterns visible to the visual system, not just the analytic mind. The engineers at Thiokol had analyzed the data correctly. Their minds knew the pattern. But the chart did not put the pattern in front of the manager's visual system, and the manager made a decision based on what was in front of his eyes. A chart is a piece of external cognition. It is the place where the analyst's understanding becomes available to a reader who does not yet have it. When the chart fails to transfer that understanding, the understanding might as well not exist.
Prepare your charts before the high-stakes moment. The Thiokol engineers did not have time to redesign their charts on the evening of January 27. The chart they sent to NASA was the chart they had prepared weeks earlier for a different audience and a different question. For high-stakes communications, the chart-design decision happens long before the communication. Build the charts you would want to show at the most important moment, and have them ready.
Discussion Questions
- On the difference between lying and failing to communicate. Do you think the Thiokol engineers' chart was dishonest in any meaningful sense? If not, does that mean the chart is ethically acceptable? How do you weigh the obligation to "not mislead" against the obligation to "make the truth visible"?
- On selective presentation. The Thiokol chart omitted the seventeen flights that had flown without O-ring incidents. This omission was passive — a reporting convention, not a deliberate choice to hide data. Does the absence of intent matter when the effect is to hide the pattern? How do you prevent passive cherry-picking in your own work?
- On time pressure. The Thiokol engineers had hours, not weeks, to make their case. They used the charts they had. How should visualization ethics adapt to time-constrained decision contexts? What can an analyst prepare in advance to be ready for a high-stakes moment?
- On the audience's ability to see patterns. The NASA managers were not statisticians. They were being asked to synthesize a temperature-vs-damage relationship from a chart that did not show the temperature axis. What assumptions should you make about your audience's ability to do mental visualization work? When is it safe to ask the viewer to "just look at the numbers"?
- On when you have data that could prevent harm. If you have analyzed data that supports an urgent conclusion — safety, public health, financial risk — and you have control over how the data is visualized, what obligations follow? Is it enough to "not lie," or does the clarity ethic require you to actively make the pattern visible?
- On the connection to your own practice. Think of the last chart you made where the audience had a decision to make based on what you showed. Did the chart put the decision-relevant pattern in front of the viewer's visual system, or did it require the viewer to infer the pattern from a structure that did not support the inference? How would you redesign it now?
Seven people died on January 28, 1986, for many reasons — organizational pressure, schedule stress, communication failure, engineering tradeoffs, and cold weather. The chart was not the only cause. But the chart was a cause. When the data supported an urgent warning and the chart failed to convey that warning, the failure of visualization had a cost. The responsibility of the chart maker is not only to avoid lies but to avoid silences when the data speaks. Visualization ethics is a practice of voice, not just restraint.