Case Study 2: When Charts Lie — The Cost of Misleading Visualization
Real examples from news, politics, and business where chart design choices — intentional or careless — led to public misunderstanding, bad decisions, and eroded trust.
Why This Matters
If Chapter 1 argues that visualization is a cognitive amplifier, this case study examines the dark side of that power. A tool that amplifies perception can also amplify misperception. A chart that communicates instantly and pre-attentively can communicate the wrong thing just as instantly and pre-attentively.
The examples in this case study are not hypothetical. They are real charts that appeared in real media outlets, Congressional hearings, corporate reports, and social media posts. They reached millions of people. In several cases, they influenced decisions with tangible consequences.
The goal is not to make you cynical about data visualization — it is to make you literate. A visually literate person can look at a chart and ask: What choices did the designer make? How do those choices shape the message? What would the data look like if the choices were different?
Case 1: The Truncated Axis in Political Media
The Chart
In 2015, a major cable news network displayed a chart purporting to show the change in the number of people enrolled in a federal program over a five-year period. The numbers were approximately:
- Year 1: 6.0 million
- Year 5: 6.4 million
The y-axis of the bar chart started at 5.9 million. As a result, the bar for Year 5 appeared roughly five times as tall as the bar for Year 1 — suggesting a massive increase.
The Reality
The actual increase was approximately 6.7% over five years — roughly 1.3% per year. On a bar chart with a y-axis starting at zero, the two bars would be virtually indistinguishable in height. The visual difference between 6.0 million and 6.4 million, when the axis spans 0 to 6.4 million, is a sliver at the top of two nearly identical bars.
By truncating the axis — starting it at 5.9 million — the chart magnified the visual difference by a factor of roughly 60. A 6.7% increase was rendered as a visual difference that suggested a 400% increase.
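The magnification arithmetic can be checked in a few lines. This sketch uses the approximate figures from the case (6.0 million, 6.4 million, and a 5.9 million axis start); the helper name is ours, not from any charting library.

```python
# Sketch: how a truncated y-axis inflates an apparent change.
# Values are the case's approximate figures; apparent_increase is our own helper.
def apparent_increase(old, new, axis_start):
    """Relative increase implied by bar heights drawn from axis_start."""
    old_bar = old - axis_start
    new_bar = new - axis_start
    return (new_bar - old_bar) / old_bar

actual = apparent_increase(6.0, 6.4, 0.0)     # honest zero baseline
distorted = apparent_increase(6.0, 6.4, 5.9)  # truncated baseline

print(f"{actual:.1%} {distorted:.1%}")  # 6.7% 400.0%
print(f"{distorted / actual:.0f}x")     # 60x
```

The same data, the same bars — only the baseline changes, and the implied increase jumps from under 7% to 400%.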
Why It Matters
This is the most common and most insidious form of chart deception because it is technically accurate. Every data point on the chart is correct. The chart is not fabricated. But the visual impression it creates is grossly disproportionate to the underlying reality.
The human visual system processes the relative heights of bars pre-attentively. When the short bar is one-fifth the height of the tall bar, the viewer's immediate perception is "massive difference." The conscious mind can check the axis labels and recalibrate — but most viewers do not. They absorb the visual impression and move on.
In a political context, this technique can make a small change look like a crisis (when the change is in something the presenter opposes) or an insignificant change look impressive (when the presenter wants to take credit). The same data can be made to argue for or against a policy, depending solely on where the y-axis starts.
The Design Fix
For bar charts — where the viewer judges magnitude by comparing the full length of bars — the y-axis should almost always start at zero. This is one of the few near-absolute rules in data visualization. The reason is straightforward: bar length encodes quantity. If the bar for 6.4 million is not 6.7% taller than the bar for 6.0 million, the chart is lying about the magnitude of the difference.
For line charts, the rule is less strict. A line chart encodes trend through slope, not bar height. A truncated axis on a line chart can be appropriate when the goal is to show variation within a narrow range — as long as the axis range is clearly labeled and the viewer is not misled about absolute magnitudes.
Case 2: Cherry-Picked Time Frames in Climate Debate
The Chart
In 2012, a widely circulated blog post claimed that "global warming stopped in 1998." The accompanying chart showed global average temperature from 1998 to 2012, and the trend line was essentially flat — no statistically significant warming.
The chart was picked up by numerous media outlets and shared millions of times on social media. It was cited in political arguments against climate action.
The Reality
1998 was an exceptionally hot year — the hottest on record at the time — due to a powerful El Niño event. Starting a trend line at the peak of a short-term fluctuation and ending it just 14 years later mathematically guaranteed a flat or nearly flat trend. It is the statistical equivalent of standing on a hilltop and concluding that the mountain is flat because you cannot see the slope from where you stand.
When the same data is plotted from 1880 to the present — the full instrumental record — the long-term warming trend is unmistakable. The period 1998-2012, rather than being a "pause," is simply a short stretch where natural variability temporarily masked the ongoing upward trend. Every year since 2014 has been warmer than 1998.
Why It Matters
Cherry-picking time frames is particularly dangerous because it exploits a genuine mathematical reality: any trend, no matter how strong, will appear flat if you zoom in to a short enough window. Day-to-day stock market data is volatile even when the long-term trend is strongly upward. Month-to-month crime statistics fluctuate even when the multi-year trend is declining.
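The zoom-in effect is easy to reproduce with synthetic data. The sketch below builds an invented series (not real temperatures) with a steady upward trend plus a single spike at the start of a short window, then fits a least-squares slope over the full record and over the short window.

```python
# Sketch: synthetic data, not real temperature records.
def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1880, 2013))
# True trend of +0.01/yr, plus a one-off hot spike in 1998 (a stand-in
# for the El Niño year).
temps = [0.01 * (y - 1880) + (0.25 if y == 1998 else 0.0) for y in years]

full_trend = slope(years, temps)           # close to the true 0.01/yr
i = years.index(1998)
short_trend = slope(years[i:], temps[i:])  # window starts at the spike

print(full_trend, short_trend)
```

Starting the window exactly at the spike cuts the fitted trend to well under half the true rate, even though the underlying warming never stopped.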
The technique works because most viewers assume the chart shows the full picture. They do not ask "Why does the chart start here?" or "What does the data look like before this window?" The designer's choice of time frame is invisible unless the viewer is trained to question it.
In the context of climate policy, the "global warming paused" narrative had real consequences. It was used to argue against emissions regulations, to question the scientific consensus, and to delay policy action. The misleading chart did not create the opposition to climate action, but it gave that opposition a veneer of data-driven credibility.
The Design Fix
When showing time series data, default to showing the longest available time range. If you must show a subset, clearly explain why the subset was chosen and show the full range in a secondary chart or inset for context. Annotate the starting and ending points so viewers understand the framing.
More fundamentally, be suspicious of any time series chart that starts at a peak or trough. Ask: "Was the starting point chosen because it is meaningful, or because it produces the desired visual trend?"
Case 3: Area Distortion in Corporate Infographics
The Chart
A technology company's annual report included a graphic showing the growth of its user base over three years: 10 million users in Year 1, 20 million in Year 2, and 40 million in Year 3. Rather than using a bar chart, the designer represented each year's users with a circle (bubble) whose diameter was proportional to the number of users.
The Reality
When diameter doubles, area quadruples (because area = pi * r^2). The circle for Year 2 (20 million) had four times the area of the circle for Year 1 (10 million), not twice the area. The circle for Year 3 (40 million) had sixteen times the area of Year 1's circle. The visual impression was of explosive, exponential growth that far exceeded the (still impressive) actual growth rate.
This is not a subtle distortion. A viewer who intuitively compares the sizes of the circles — which is exactly what the pre-attentive visual system does — perceives a growth trajectory that is dramatically steeper than reality.
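The quadratic blow-up is simple to verify. This sketch scales circle diameters with the user counts from the case and compares the resulting areas; it also shows the square-root radius scaling that keeps area proportional to value.

```python
import math

# Sketch: bubble sizes for the case's user counts (10M, 20M, 40M).
users = [10, 20, 40]

# Mistake: diameter proportional to the value, so area grows quadratically.
areas_by_diameter = [math.pi * (u / 2) ** 2 for u in users]
ratios = [a / areas_by_diameter[0] for a in areas_by_diameter]

# Correct: area proportional to the value, so radius scales with the
# square root of the value.
radii_by_area = [math.sqrt(u / math.pi) for u in users]

print([round(r, 1) for r in ratios])  # [1.0, 4.0, 16.0]
```

A 4x increase in users becomes a 16x increase in ink — exactly the distortion the annual report presented.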
Why It Matters
Area distortion is pervasive in infographics, corporate communications, and news media. It appears whenever a designer uses two-dimensional objects (circles, icons, pictures) to represent one-dimensional quantities. The problem is compounded when the objects are three-dimensional (spheres, cubes, 3D bars), because volume scales with the cube of the linear dimension.
The distortion can go in either direction. If a designer scales area proportional to the value (the mathematically correct approach), the diameters will not scale proportionally, and large values can appear smaller than viewers expect. If the designer scales diameter proportional to the value (the intuitive but incorrect approach), areas will grow quadratically, making growth appear much more dramatic than it is.
Neither approach produces perfectly accurate perceptions, because human perception of area is inherently nonlinear — we tend to underestimate the area of larger shapes relative to smaller ones (Stevens' Power Law). This is one reason why position along a common scale (bar charts, dot plots) is almost always a better encoding than area.
The Design Fix
Avoid using area to encode precise quantities unless the context demands it (geographic data, bubble charts where relative size is approximate). When you must use area, scale the area proportionally to the value, not the diameter or radius. Label the values explicitly so viewers can verify the visual impression against the actual numbers.
Better yet, use a bar chart. Bars encode quantity with length along a common scale — the most accurately perceived visual encoding there is. A bar chart showing 10M, 20M, and 40M would still communicate impressive growth without the perceptual distortion.
Case 4: The Missing Context in Congressional Testimony
The Chart
In a 2015 U.S. Congressional hearing, a chart was presented showing two trend lines for a women's healthcare organization:
- An upward-sloping line representing the number of abortions performed
- A downward-sloping line representing cancer screening and prevention services
The two lines crossed dramatically, suggesting that the organization had shifted its focus from cancer prevention to abortions.
The Reality
The chart had multiple serious design problems:
No y-axis labels or scales. The two lines were plotted on different scales without any indication to the viewer. The abortion numbers were in the hundreds of thousands; the cancer screening numbers were in the millions. Plotting them on the same visual scale, without labeling the axes, created the false impression that the two services were similar in magnitude and that their "crossing" was meaningful.
Cherry-picked endpoints. The chart showed only two points — one at the beginning and one at the end of the period — connected by straight lines. The actual year-by-year data showed much more complex trajectories that the straight lines obscured.
Missing context. The decline in cancer screenings was partly due to changes in medical guidelines (which recommended less frequent screening for certain populations), not a choice by the organization to reduce services. The chart presented the data without this context, allowing viewers to draw a causal inference that the data did not support.
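The unlabeled-scales problem can be illustrated numerically. The sketch below uses invented numbers of roughly the right magnitudes, not the actual figures from the hearing. Rescaling each series independently to the same visual range — which is effectively what plotting two unlabeled lines on one chart does — manufactures a crossing that does not exist on a shared scale.

```python
# Sketch: two series of very different magnitudes. Numbers are invented
# for illustration; they are not the figures from the hearing.
screenings = [2.00, 1.90, 1.80, 1.70]  # millions per year
abortions = [0.30, 0.31, 0.32, 0.33]   # millions per year

def rescale(series):
    """Map a series onto [0, 1]: its own private, unlabeled visual scale."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

# On a shared, labeled scale the lines never cross.
shared_cross = any(a >= s for a, s in zip(abortions, screenings))
# Each line rescaled to its own range: they appear to cross.
visual_cross = any(a >= s for a, s in zip(rescale(abortions), rescale(screenings)))

print(shared_cross, visual_cross)  # False True
```

The "dramatic crossing" is an artifact of the hidden per-line scales, not a property of the data.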
Why It Matters
This chart was shown during an official Congressional hearing — a context that lends credibility and authority to whatever is presented. It was broadcast on national television and shared widely on social media. Millions of people saw the chart and absorbed its visual argument (that the organization was prioritizing abortions over cancer care) without access to the information needed to evaluate that argument critically.
The chart was later fact-checked by multiple organizations and found to be deeply misleading. But by then, the visual impression had been formed. Research on misinformation consistently shows that corrections are less effective than the original false claim — and this is especially true for visual misinformation, because the visual impression is processed pre-attentively and lodges in memory more durably than a verbal correction.
The Design Fix
When plotting multiple variables on the same chart, use a dual-axis design with clearly labeled scales — or better yet, use two separate charts stacked vertically with aligned x-axes. Never plot lines on different scales without making the scale difference explicit.
Show all available data points, not just endpoints. Two points define any line, but real data has variability, trend changes, and noise that only multiple points can capture.
Provide context. If an external factor (like a change in medical guidelines) explains a trend, include that information in an annotation or footnote. A chart without context is an argument without evidence.
Patterns of Deception: A Taxonomy
These four cases illustrate patterns that recur across misleading visualizations:
1. Scale manipulation. Truncated axes, inconsistent scales, and suppressed baselines change the visual magnitude of differences. The numbers are technically correct, but the visual impression is disproportionate.
2. Selective framing. Cherry-picked time windows, omitted categories, and missing context create a narrative that the full data does not support. The viewer sees a subset and assumes it is the whole.
3. Encoding distortion. Area, volume, and 3D effects introduce nonlinear scaling that exaggerates or minimizes differences. The visual system interprets these encodings inaccurately, and the chart designer either does not know this or exploits it.
4. False comparison. Plotting unrelated variables on the same chart, using different scales without disclosure, or juxtaposing categories that are not comparable creates false visual associations.
These patterns are not always intentional. Many misleading charts are produced by well-meaning people using default software settings without understanding the visual consequences. The chart-creation tools are partly to blame — default behaviors in many software packages produce truncated axes, 3D effects, and other misleading design choices.
But intentionality does not determine impact. A misleading chart misleads whether the designer meant it to or not. Visual literacy — the ability to read charts critically, question design choices, and recognize manipulation — is a necessary skill for anyone who consumes or produces data visualization.
The Ethics of Visualization Design
These cases raise ethical questions that do not have easy answers:
Is omission the same as deception? A chart that shows accurate data but omits important context is not technically lying. But it may still mislead. Where is the line between selective presentation (which all communication requires — you cannot show everything) and deception?
Who is responsible? When a misleading chart goes viral, who bears responsibility — the designer, the publisher, the platform, or the viewer who does not read the axes? In a world where data visualization is increasingly automated and widely shared, the chain of responsibility is diffuse.
Does intent matter? Is a misleading chart produced through carelessness morally different from one produced through deliberate manipulation? The viewer is equally misled in both cases. But our ethical intuitions treat intentional deception differently from negligent error.
What is the role of the chart designer? If your manager asks you to "make the numbers look good" — to truncate an axis, cherry-pick a time frame, or use a 3D effect that exaggerates growth — what is your obligation? The chapter argues that every chart is an argument. Do chart designers have an obligation to make honest arguments, even when the person commissioning the chart wants something else?
These questions have no universal answers. But they are questions that every data visualization practitioner must confront. The power to shape perception carries the responsibility to shape it honestly.
Discussion Questions
- On detection. For each of the four cases in this study, how could a careful viewer have detected the deception without access to the underlying data? What visual cues or habits of questioning would help?
- On defaults. Several of these cases involved chart-creation software producing misleading defaults (auto-scaled axes, 3D effects). Should software companies bear responsibility for the misleading charts their tools produce? Should tools make it harder to create misleading charts?
- On corrections. Research shows that visual misinformation is harder to correct than verbal misinformation because visual impressions are processed pre-attentively and lodge in memory. Given this, what responsibilities do media organizations have when they discover that a chart they published was misleading?
- On professional ethics. You are a data analyst, and your manager asks you to create a chart that, while technically accurate, uses a truncated axis to make a small improvement look dramatic for an investor presentation. What do you do? What arguments would you make to your manager?
- On visual literacy. Should data visualization literacy — the ability to read and critically evaluate charts — be a standard part of education, like reading and writing literacy? Why or why not? At what level should it be taught?
- On scale and impact. In the age of social media, a misleading chart can reach millions of people in hours. Has the ethical obligation of chart designers changed in the digital era compared to the era of print? If so, how?
The examples in this case study are not edge cases. Misleading charts appear every day in news, social media, corporate communications, and government proceedings. The question is not whether you will encounter them — you will. The question is whether you will recognize them when you do, and whether the charts you create will be honest.