Case Study 16.2: The Redistricting Visualization Problem — How Maps Shaped the Gerrymandering Debate

Background

Gerrymandering — the drawing of electoral district boundaries for political advantage — has been debated in American politics since Massachusetts Governor Elbridge Gerry signed an 1812 redistricting bill whose map included a salamander-shaped district. What has changed dramatically in the twenty-first century is the role of data visualization in both the practice and the adjudication of redistricting.

Two developments converged after the 2010 census to make visualization central to the gerrymandering debate. First, the redistricting process itself became increasingly data-intensive: mapping software, GIS tools, and voter-level partisan data enabled more precise district drawing than had been possible with manual drafting. Second, the legal challenges to gerrymandered maps increasingly relied on visual and statistical evidence that courts had to interpret — and visualizations became a central language of that legal dispute.

The Pennsylvania Case Study

Pennsylvania's congressional district map, drawn by the Republican-controlled legislature after the 2010 census, became one of the most-litigated redistricting maps of the decade. The district boundaries had shapes that suggested deliberate manipulation: the 7th Congressional District (nicknamed "Goofy Kicking Donald Duck" by critics) had a highly convoluted boundary that appeared to connect Republican-leaning communities while carefully avoiding Democratic-leaning ones.

The visual evidence — the shape of the district — was compelling to lay observers. But legal adjudication required more than visual impression. Courts needed analytical frameworks for distinguishing natural political geography from deliberate partisan manipulation.

Three visualization and measurement approaches became central to the litigation:

Efficiency gap analysis. Developed by Nicholas Stephanopoulos and Eric McGhee, the efficiency gap compares the votes each party "wastes" — surplus votes beyond what a winning candidate needs, plus all votes cast for losing candidates — and expresses the difference between the parties' wasted votes as a share of total votes cast. A large efficiency gap indicates that the map consistently wastes more votes for one party than the other, suggesting systematic manipulation. The efficiency gap is a number, not a visual, but it is frequently displayed in charts showing state-by-state efficiency gaps across election cycles.
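The wasted-votes arithmetic can be sketched with invented numbers — a hypothetical five-district state in which Party A is "packed" into one district and narrowly loses the rest. These vote counts are illustrative only, not Pennsylvania data:

```python
def efficiency_gap(districts):
    """districts: list of (party_a_votes, party_b_votes) per district.

    Returns (wasted_a - wasted_b) / total_votes. A positive value means
    the map wastes more Party A votes, disadvantaging Party A.
    """
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        district_total = a + b
        total += district_total
        threshold = district_total // 2 + 1  # votes needed to win
        if a > b:
            wasted_a += a - threshold  # winner's surplus votes are wasted
            wasted_b += b              # all losing votes are wasted
        else:
            wasted_b += b - threshold
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Five districts of 100 votes each (invented): Party A wins one district
# 85-15 (heavily packed) and loses the other four 45-55 (cracked).
districts = [(85, 15), (45, 55), (45, 55), (45, 55), (45, 55)]
print(round(efficiency_gap(districts), 3))  # large positive gap against Party A
```

The packed-and-cracked pattern shows up directly in the arithmetic: Party A wastes 34 surplus votes in its blowout win plus 180 losing votes, while Party B wastes only 31, even though the statewide vote is nearly even.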

Ensemble comparison. Redistricting analysts built algorithms that generate thousands or millions of random valid redistricting maps (meeting legal criteria for compactness, contiguity, and population balance). They then showed that the enacted map produced partisan outcomes in the extreme tail of this distribution — that virtually no randomly drawn map would have produced as skewed an outcome as the enacted one. This analysis was visualized as a histogram showing the distribution of Democratic seat shares across randomly drawn maps, with the enacted map's outcome shown as a point far in the tail.
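A drastically simplified version of the ensemble idea can be sketched in Python. Real redistricting ensembles sample contiguous, compact, population-balanced maps (often via Markov chain Monte Carlo); this toy version ignores geography entirely and just shuffles invented precinct vote shares into equal-size districts, so every number in it is an assumption for illustration:

```python
import random

random.seed(0)

# Invented precinct-level Democratic vote shares: 40 precincts,
# about 52.5% Democratic statewide.
precincts = [0.35] * 12 + [0.45] * 8 + [0.60] * 12 + [0.75] * 8

def dem_seats(assignment, n_districts=5):
    """Count districts whose mean Democratic share exceeds 0.5."""
    size = len(assignment) // n_districts
    seats = 0
    for d in range(n_districts):
        block = assignment[d * size:(d + 1) * size]
        if sum(block) / len(block) > 0.5:
            seats += 1
    return seats

# Build the ensemble: many random precinct-to-district assignments.
# (A real analysis would histogram these seat counts.)
ensemble = []
for _ in range(10_000):
    random.shuffle(precincts)
    ensemble.append(dem_seats(precincts))

enacted_seats = 1  # hypothetical outcome under the enacted map
share_as_low = sum(s <= enacted_seats for s in ensemble) / len(ensemble)
print(f"ensemble seat range: {min(ensemble)}-{max(ensemble)}; "
      f"fraction of random maps with <= {enacted_seats} Dem seats: {share_as_low:.4f}")
```

If almost no randomly drawn map yields an outcome as skewed as the enacted one, the enacted map sits in the extreme tail of the histogram — the visual claim at the heart of the litigation.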

Precinct-level flow maps. Geographic visualization showed how district boundaries were drawn relative to underlying partisan geography — where boundaries took sharp turns to include or exclude specific precincts. These maps made visible to judges the choices embedded in the district lines.

What the Visualization Debates Revealed

Maps are arguments. The Pennsylvania litigation demonstrated that showing a district's shape is itself an argumentative act. Showing the 7th District in isolation, labeled "Goofy Kicking Donald Duck," makes an implicit argument about its legitimacy. Showing it in the context of natural geographic and political features might make it look less irregular. The choice of what to show and how to frame it is not neutral.

Statistical visualizations require explanation. The ensemble comparison histogram was among the most sophisticated political data visualizations ever submitted to a court. Judges had to understand that the histogram showed randomly generated maps, that the enacted map was an outlier, and that "outlier" had a technical meaning that implied intentional rather than accidental manipulation. Post-case interviews with judges suggested that grasping this concept required extensive briefing — the visualization alone was insufficient.

Visual complexity has political costs. The efficiency gap and ensemble methods are technically sophisticated but visually complex. Critics of these methods argued that their complexity made them opaque to the public and to judges — that they were "black box" analyses that produced a number without making the analytical logic transparent. This critique was sometimes deployed as a litigation strategy: if you can make the analytical method seem too complex to trust, you can blunt its evidential impact.

The partisan nature of visualization choices. Both sides in redistricting litigation produce their own visualizations. Democratic challengers showed the "Goofy" districts; Republican defenders showed alternative visualizations depicting the maps as compact and compliant. Expert witnesses on both sides were sophisticated data analysts producing technically sound but strategically selected visualizations.

In Rucho v. Common Cause (2019), the U.S. Supreme Court ruled 5-4 that federal courts had no authority to adjudicate partisan gerrymandering claims — not because gerrymandering wasn't a problem but because there was no judicially manageable standard for identifying when it had crossed a constitutional line. The majority rejected both the efficiency gap and the ensemble comparison methods as failing to provide a sufficiently clear threshold.

The dissent, written by Justice Elena Kagan, extensively cited the visual and statistical evidence — including maps showing district shapes and ensemble comparison histograms. Kagan argued that the evidence was clear and that the majority was declining to act on clear evidence for political reasons.

What Rucho revealed is that visualization and statistical evidence, however sophisticated, operate within legal and political frameworks that determine their admissibility and weight. The evidence that persuaded political scientists did not persuade a court majority. The gap between scientific consensus and legal adjudication is not simply a visualization problem — it is a fundamental question about who gets to decide what evidence means.

Implications for Political Analysts

Visualization is advocacy. Every choice in a political visualization — what to show, how to label it, what comparison to draw — has argumentative implications. Analysts working in legal or public advocacy contexts need to be especially attentive to the ways their visualization choices embed arguments.

Understanding your audience's decision framework matters. The ensemble comparison method failed to move the Rucho majority not because it was statistically weak but because the majority didn't accept the premise that federal courts should apply such methods to redistricting. Understanding what kind of evidence your audience's decision framework will accept is as important as the technical quality of your analysis.

Complexity creates vulnerability. Statistical sophistication in visualization creates two risks: the audience won't understand it, or opponents will argue that complexity equals opacity. Pairing sophisticated analysis with clear, simple visual summaries that communicate the key finding accessibly is better practice than presenting technical output and hoping audiences will understand it.

Geographic visualization has inherent political content. Maps of political boundaries are not neutral technical documents; they embed the outcomes of political processes. Using such maps in analysis requires acknowledging the political choices they encode.

Discussion Questions

  1. Should the "Goofy Kicking Donald Duck" visual shape argument (the district looks gerrymandered because of its irregular shape) be sufficient evidence of illegal gerrymandering? What are the limits of this visual argument?

  2. The ensemble comparison method generates thousands of random maps to establish a baseline. Is this a valid approach to identifying intentional gerrymandering? What assumptions does it require?

  3. If you were a data visualization expert asked to present the Pennsylvania redistricting evidence to a general audience (not a court), what three visualizations would you choose? What would each one show and why?

  4. The Rucho decision moved partisan gerrymandering challenges from federal courts to state courts and legislatures. How should political analysts adjust their visualization and communication strategies when the relevant audience shifts from judges to state legislators or to voters in initiative campaigns?