Capstone 1 Grading Rubric: The Battleground State Audit

Overview

This rubric is used to evaluate student audits submitted for Capstone 1. The rubric is organized into eight assessment dimensions, each weighted according to its contribution to the overall analytical challenge of the capstone. Total points: 110.

Before grading begins, evaluators confirm the following threshold criteria. Documents that do not meet these thresholds receive an incomplete until remediated:

  • Minimum word count of 8,000 words (not counting tables and references)
  • All six audit questions explicitly addressed
  • Quality-weighted polling average shown with step-by-step calculation
  • At least three turnout scenarios modeled
  • Equity and representation audit section present and substantive
  • References section present

Dimension 1: Polling Analysis (22 points)

This dimension assesses the student's ability to evaluate, grade, and aggregate a polling dataset into a defensible quality-weighted average, and to extract meaningful trend and house-effect signals from the data.

1a. Poll Quality Assessment (8 points)

Score Description
7–8 Student applies a clearly articulated grading rubric to each poll, drawing explicitly on AAPOR Transparency Initiative standards and methodology criteria from Chapters 8 and 10. Grades are justified with specific references to what each poll did and did not disclose. Partisan-sponsor penalty is clearly explained and consistently applied. Evaluator can follow the grading logic for any individual poll without ambiguity.
5–6 Student applies quality grading with mostly clear criteria. One or two polls are graded without full justification, or the partisan penalty is inconsistently applied. Grading rubric is identifiable but not fully explicit.
3–4 Student categorizes polls as "better" or "worse" quality but does not apply a systematic rubric. AAPOR standards are mentioned but not applied poll-by-poll. Grading appears impressionistic rather than methodological.
1–2 Minimal quality assessment. Student notes that polls differ but does not grade them or explain quality criteria.
0 No poll quality assessment. Student reports topline numbers only.

1b. Quality-Weighted Polling Average Construction (10 points)

Score Description
9–10 Student constructs a fully documented quality-weighted polling average. The calculation table (analogous to Table 5 in the capstone document) shows: grade weight, recency multiplier, combined weight, and per-candidate topline for each poll. The weighted average is computed correctly from these inputs. The student's final average is internally consistent with their quality grades (polls given lower grades have lower weight).
7–8 The calculation is substantially correct. Minor computational errors (not exceeding ±0.3 percentage points in the final average) or minor inconsistencies between grade description and grade weight are present but do not undermine the overall methodology.
5–6 The student attempts a weighted average but the weighting scheme is only partially specified. The recency adjustment is absent or inconsistently applied. The final average is plausible, but the documentation does not allow the reader to fully reconstruct the calculation.
3–4 The student computes a simple (unweighted) polling average and labels it "quality-weighted," or the weighting scheme is not documented.
1–2 The student reports a polling average derived from a source other than their own calculation, or the calculation shown does not match the reported average.
0 No polling average constructed.
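For calibration, the weighting logic expected in 1b can be expressed as a short sketch. The polls, grade weights, recency multipliers, and toplines below are hypothetical illustrations, not values from the capstone dataset:

```python
# Hypothetical polls: (pollster, grade_weight, recency_multiplier,
# candidate A topline %, candidate B topline %).
polls = [
    ("Poll 1", 1.0, 1.0, 48.0, 45.0),
    ("Poll 2", 0.7, 0.8, 46.0, 47.0),
    ("Poll 3", 0.4, 0.6, 50.0, 44.0),
]

def quality_weighted_average(polls):
    """Combine grade weight and recency multiplier into one weight per
    poll, then take the weighted mean of each candidate's topline."""
    combined = [(g * r, a, b) for _, g, r, a, b in polls]
    total_w = sum(w for w, _, _ in combined)
    avg_a = sum(w * a for w, a, _ in combined) / total_w
    avg_b = sum(w * b for w, _, b in combined) / total_w
    return round(avg_a, 1), round(avg_b, 1)
```

A student's Table-5-style calculation should be reconstructible in exactly this way: a combined weight per poll, then a weighted mean per candidate.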

1c. Trend Analysis and House Effect Identification (4 points)

Score Description
4 Student identifies meaningful trend patterns in the polling data (e.g., tightening during a specific time window) and links them to plausible causal factors. House effects for pollsters with multiple surveys are estimated and reported. The analysis distinguishes between signal and noise in the poll-to-poll movement.
2–3 Student describes trend patterns but either does not estimate house effects or links trend changes to causal factors without adequate evidence.
1 Student notes that the polling changed over time but provides no structured trend analysis.
0 No trend analysis.

Dimension 2: Demographic and Electoral Geography Analysis (18 points)

This dimension assesses the student's ability to construct and interpret a county-level demographic and political history analysis, including analysis of key demographic subgroups and identification of the persuadable voter universe.

2a. County-Level Demographic and Political Structure (8 points)

Score Description
7–8 Student constructs a county-level table analogous to Table 2 in the capstone document, with registered voter counts, demographic composition, party registration breakdown, and at least two cycles of prior election results. The analysis contextualizes these figures within the state's broader political trajectory — not just describing demographics but explaining their electoral significance.
5–6 County-level table is present and mostly complete. Analysis connects demographics to political behavior but does not consistently explain the mechanism (e.g., why college attainment correlates with Democratic shift).
3–4 Student provides demographic context but the analysis is primarily descriptive. Political history is present but not systematically connected to the analytical framework.
1–2 Sparse demographic section. State-level aggregate figures are given without county breakdown.
0 No demographic analysis.

2b. Subgroup Analysis: Hispanic/Latino Electorate (4 points)

Score Description
4 Student analyzes the Hispanic/Latino electorate as a coalition of communities rather than a monolithic group. At minimum, the analysis distinguishes among national-origin communities and compares their registration rates and turnout patterns. Ecological fallacy is avoided. Analytical limitations of surname-based estimates are acknowledged.
2–3 Student recognizes the heterogeneity of the Latino electorate but analysis remains largely at the aggregate level. One specific subgroup distinction is made but not fully developed.
1 Student treats Latino voters as a homogeneous bloc. No subgroup differentiation.
0 No Hispanic/Latino subgroup analysis.

2c. Swing Universe Identification and Counterfactual Analysis (6 points)

Score Description
5–6 Student identifies the persuadable voter universe by geography and subgroup, with approximate size estimates and geographic concentration. At least one counterfactual scenario ("what if X demographic group turned out 5% higher?") is worked through step-by-step with correct arithmetic, producing a net vote estimate and an assessment of its magnitude relative to likely margins.
3–4 Swing universe is identified. One counterfactual is attempted but the arithmetic is incomplete or the result is not connected back to overall margin implications.
1–2 Swing universe is described qualitatively without size estimates. No counterfactual analysis.
0 No swing universe or counterfactual analysis.
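The counterfactual arithmetic expected in 2c follows this pattern. The subgroup size, turnout boost, and support rates below are hypothetical:

```python
def counterfactual_net_votes(eligible, turnout_boost, support_a, support_b):
    """Estimate the net vote change if a subgroup's turnout rate rises by
    turnout_boost (e.g. 0.05), holding candidate support rates fixed."""
    extra_voters = eligible * turnout_boost
    return extra_voters * (support_a - support_b)

# Hypothetical subgroup: 400,000 eligible voters, turnout up 5 points,
# splitting 60/35 for candidate A.
net = counterfactual_net_votes(400_000, 0.05, 0.60, 0.35)
```

A complete answer then compares the resulting net figure against the likely statewide margin to assess whether the counterfactual is decisive or marginal.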

Dimension 3: Turnout Modeling and Scenario Analysis (16 points)

This dimension assesses the student's ability to construct a structured baseline turnout model and develop three internally consistent scenarios.

3a. Baseline Turnout Model (8 points)

Score Description
7–8 Student constructs a county-level baseline turnout projection (analogous to Table 3 in the capstone document) with explicit turnout rate assumptions for each county or region, a candidate vote share assumption for each, and a computed net vote total. Assumptions are justified with reference to historical election data and early voting trends. The baseline produces a total vote count and margin estimate internally consistent with the polling analysis.
5–6 Baseline model is present and mostly structured. One or two county-level assumptions are not justified or appear inconsistent with historical data. Internal consistency with polling analysis is approximate but not precise.
3–4 Student presents a turnout model at the state level only (aggregate turnout %) without county-level disaggregation. No explicit candidate share assumptions.
1–2 Student discusses turnout in qualitative terms without constructing a model.
0 No turnout model.
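A baseline model meeting the 7–8 standard is, in effect, the following computation carried out county by county. The county names and figures here are hypothetical:

```python
# Hypothetical county rows: (county, registered voters, projected
# turnout rate, candidate A share among projected voters).
counties = [
    ("County 1", 900_000, 0.62, 0.57),
    ("County 2", 450_000, 0.68, 0.44),
    ("County 3", 300_000, 0.71, 0.38),
]

def baseline_margin(counties):
    """Total projected votes and candidate A's net vote margin, summing
    county-level turnout x vote-share projections (two-way race)."""
    total_votes = net_a = 0.0
    for _, reg, turnout, share_a in counties:
        votes = reg * turnout
        total_votes += votes
        net_a += votes * (2 * share_a - 1)  # A votes minus B votes
    return round(total_votes), round(net_a)
```

Internal consistency means the margin implied by this table should be reconcilable with the quality-weighted polling average, and any gap should be explained.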

3b. Three-Scenario Analysis (6 points)

Score Description
5–6 Low, medium, and high turnout scenarios are clearly specified with different assumptions for overall turnout and/or county-level patterns. Each scenario produces a computable margin estimate. The student explains why each scenario is plausible (not just arbitrary) and identifies which assumptions drive the difference between scenarios. Results across scenarios are compared in a summary table.
3–4 Three scenarios are named and described, with approximately correct margin estimates for each. Assumptions driving the differences are partially specified. Summary comparison is present but incomplete.
1–2 Three scenarios are named but the student does not clearly specify what is different about each or produce margin estimates.
0 Fewer than three scenarios, or no scenario analysis.
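The scenario comparison in 3b reduces to varying the baseline assumptions and recomputing the margin. A minimal sketch with hypothetical turnout multipliers and vote shares:

```python
# Hypothetical scenario assumptions: a statewide turnout multiplier
# applied to a baseline vote total, with candidate A's share shifting
# as marginal voters enter or drop out of the electorate.
baseline_votes = 1_000_000
scenarios = {
    "low":    (0.92, 0.490),   # (turnout multiplier, candidate A share)
    "medium": (1.00, 0.500),
    "high":   (1.08, 0.512),
}

def scenario_margins(baseline_votes, scenarios):
    """Margin (A minus B, in votes) under each named turnout scenario,
    assuming a two-way race."""
    out = {}
    for name, (mult, share_a) in scenarios.items():
        votes = baseline_votes * mult
        out[name] = round(votes * (2 * share_a - 1))
    return out
```

A full-credit response states why each multiplier-and-share pair is plausible and identifies which of the two assumptions drives the spread between scenarios.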

3c. Early Voting and Registration Change Analysis (2 points)

Score Description
2 Student incorporates early voting data and registration changes since the last comparable election into the turnout model. Caveats about the limitations of early vote party-registration data (it is not a vote tally) are explicitly noted.
1 Early voting is mentioned but not integrated into the model, or the required caveats are absent.
0 No early voting or registration change analysis.

Dimension 4: Media and Advertising Analysis (12 points)

This dimension assesses the student's ability to analyze campaign advertising strategy, media framing, and fact-check patterns.

4a. Advertising Spend and Geographic Distribution (4 points)

Score Description
4 Student presents advertising spending data organized by entity (candidate, allied PAC, outside groups) and by medium. Geographic concentration of broadcast spending is analyzed and connected to the campaign strategies identified in the demographic section. The relationship between outside spending patterns and observable developments in the race (e.g., tightening in specific counties) is discussed.
2–3 Advertising spending is documented but geographic analysis is limited. The connection to campaign strategy is noted but not developed.
1 Basic spending figures are reported without analysis.
0 No advertising analysis.

4b. Message Analysis (5 points)

Score Description
5 Student identifies each campaign's core narrative and analyzes the strategic logic behind it: What audience does it target? Which of the candidate's vulnerabilities is it designed to address? How does it interact with the demographic and geographic analysis? Student distinguishes between message description and message analysis, consistently providing the latter.
3–4 Core narratives are identified and partially analyzed. Analysis of strategic logic is present for at least one campaign's messaging but incomplete for the other.
1–2 Messaging is summarized but not analyzed. Student describes what ads say without explaining why campaigns are saying it.
0 No message analysis.

4c. Fact-Check Tracker and Media Framing (3 points)

Score Description
3 Student identifies at least two claims from the race that have been subject to public fact-checking, reports the fact-checker's rating with appropriate sourcing, and connects the fact-check findings to campaign behavior (e.g., did the campaign modify the claim?). At least one observation about how different types of media outlets are framing the race differently is present.
2 Fact-check tracker is present with at least one example. Framing analysis is cursory.
1 One fact-check is mentioned without full documentation. No framing analysis.
0 No fact-check or framing analysis.

Dimension 5: Campaign Finance Analysis (10 points)

This dimension assesses the student's ability to analyze campaign finance data as a strategic indicator, not merely a financial record.

5a. Fundraising and Spending Breakdown (5 points)

Score Description
5 Student presents total raised, cash on hand, total spent, and burn rate for both campaigns. Small-dollar vs. large-dollar donor breakdown is analyzed and its strategic implications discussed. Outside spending (super PACs and dark money, if present) is documented and connected to the overall resource analysis. Student distinguishes between campaign-level and total-ecosystem-level financial comparisons.
3–4 Most financial data is present. Donor breakdown is present for at least one campaign. Outside spending is noted but donor transparency issues are not analyzed.
1–2 Basic fundraising totals are reported without analysis of donor composition or outside spending.
0 No campaign finance analysis.
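Evaluators checking a student's burn-rate figure can use the sketch below. Note that burn rate has no single standard definition, so students should state the one they use; the campaign figures here are hypothetical:

```python
def burn_rate_and_runway(total_spent, cash_on_hand, months_elapsed):
    """Burn rate as average spending per month, and months of runway at
    that pace. This is one common definition; campaigns and news outlets
    vary in how they compute it."""
    monthly_burn = total_spent / months_elapsed
    runway_months = cash_on_hand / monthly_burn
    return round(monthly_burn), round(runway_months, 1)

# Hypothetical campaign: $18M spent over 6 months, $6M cash on hand.
burn, runway = burn_rate_and_runway(18_000_000, 6_000_000, 6)
```

A short runway late in the race may indicate deliberate all-in spending rather than financial distress; full credit in 5b requires distinguishing the two.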

5b. Strategic Resource Deployment Analysis (5 points)

Score Description
5 Student connects financial data to strategic behavior: Where is each campaign deploying resources, and does this reflect the geographic priorities identified in the demographic analysis? What does the burn rate suggest about campaign confidence or financial stress? Does the outside spending pattern reveal a theory of victory that differs from the official campaign narrative?
3–4 Financial data is connected to strategy for at least one campaign. Burn rate or geographic spending distribution is discussed but not fully integrated into the strategic picture.
1–2 Finance analysis is primarily descriptive. Resources are reported but not connected to strategic logic.
0 No strategic resource analysis.

Dimension 6: Forecasting (12 points)

This dimension assesses the student's ability to integrate evidence across analytical dimensions into a coherent forecast with appropriate uncertainty quantification.

6a. Integrated Forecast Construction (6 points)

Score Description
5–6 Student constructs a forecast that explicitly weights at least three evidence streams (polling average, fundamentals, demographic factors) with documented weights summing to 100%. The point estimate is computed from these weighted inputs, not simply asserted. The probability estimate is derived from the point estimate and its uncertainty, with the conversion method explained.
3–4 Forecast is constructed by integrating evidence, but the weighting scheme is implicit rather than explicit. The probability estimate is derived but the conversion method is not fully explained.
1–2 Student produces a forecast by asserting a candidate's likely win probability without showing the integration of evidence streams.
0 No forecast, or forecast is simply the polling average relabeled.
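The weighted integration and margin-to-probability conversion expected in 6a can be sketched as follows, assuming the final margin is normally distributed around the point estimate. The weights, per-stream margins, and standard deviation below are hypothetical:

```python
from math import erf, sqrt

def integrated_forecast(streams, margin_sd):
    """Weighted point estimate of the margin from several evidence
    streams (weights must sum to 1), converted to a win probability by
    assuming the final margin ~ Normal(point estimate, margin_sd)."""
    assert abs(sum(w for w, _ in streams) - 1.0) < 1e-9
    point = sum(w * m for w, m in streams)
    # P(margin > 0) under the normal assumption, via the error function.
    win_prob = 0.5 * (1 + erf(point / (margin_sd * sqrt(2))))
    return round(point, 2), round(win_prob, 2)

# Hypothetical streams: (weight, implied margin for candidate A, points),
# e.g. polling average, fundamentals, demographic model.
streams = [(0.5, 2.1), (0.3, -0.5), (0.2, 1.0)]
point, prob = integrated_forecast(streams, margin_sd=3.5)
```

The key grading question is whether the student's probability is derived (from a point estimate plus a stated uncertainty, by some such conversion) rather than asserted.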

6b. Sensitivity Analysis and Path-to-Victory Scenarios (4 points)

Score Description
4 Student identifies the three to five key assumptions that, if wrong, would flip the forecast outcome. For each, a specific threshold is identified (e.g., "if Millbrook County breaks R+5 rather than R+1, Whitfield wins by X"). A clear path-to-victory description for each candidate is constructed from these sensitivity findings.
2–3 Sensitivity analysis is present for at least two key assumptions. Path-to-victory scenarios are described but may be qualitative rather than quantitative.
1 Student notes that the forecast could change but does not systematically analyze which assumptions are most consequential.
0 No sensitivity analysis.

6c. Uncertainty Communication (2 points)

Score Description
2 Language throughout the forecasting section consistently preserves uncertainty. Probability estimates are communicated in ways that convey the distribution of outcomes (not just the expected outcome). The document explicitly distinguishes between a forecast and a prediction, and explains what a "wrong" forecast would and would not mean.
1 Uncertainty is noted but language lapses into confident prediction in some passages.
0 The forecast is presented as a prediction of the winner. Uncertainty language is absent.

Dimension 7: Equity and Representation Audit (10 points)

This dimension assesses the student's ability to apply an equity lens to the analytical process itself, identifying who is underrepresented in data, what voter access concerns exist, and how the audit's own limitations should be disclosed.

7a. Polling Representation Analysis (4 points)

Score Description
4 Student analyzes the demographic composition of the polling sample (where disclosed) and identifies specific underrepresented populations with quantified gaps (e.g., "Latino respondents represent 24% of the sample vs. 32% of registered voters"). The analytical implications of these gaps — not just their existence — are discussed.
2–3 Underrepresentation is noted for at least one demographic group. The analytical implications are mentioned but not developed.
1 Polling limitations are acknowledged in a general disclaimer but no specific group-level analysis is conducted.
0 No polling representation analysis.

7b. Voter Access Concerns (4 points)

Score Description
4 Student identifies at least two concrete voter access concerns relevant to the race (e.g., polling place changes, registration purges, language access gaps) and analyzes both their potential impact on participation and their implications for the audit's own analytical completeness. The section avoids treating voter access concerns as background noise and instead treats them as first-order analytical facts.
2–3 At least one voter access concern is documented with some analysis of its implications. A second concern is mentioned but not fully developed.
1 Voter access is mentioned in passing without substantive analysis.
0 No voter access analysis.

7c. Audit Self-Assessment: Data Gaps and Limitations (2 points)

Score Description
2 Student applies something analogous to Adaeze's equity checklist to their own document, explicitly identifying the most significant data gaps and limitations in their own analysis. The section does not merely list limitations but explains what effect those limitations have on the confidence that should be placed in specific findings.
1 Some limitations are disclosed in a standard disclaimer section without analysis of their analytical effects.
0 No self-assessment of audit limitations.

Dimension 8: Writing Quality, Integration, and Scholarly Standards (10 points)

This dimension assesses the overall quality of the document as a piece of analytical writing — its coherence, logical flow, citation practices, and professional presentation.

8a. Document Coherence and Section Integration (5 points)

Score Description
5 The document reads as a unified analytical argument, not as a collection of separate section-by-section exercises. The demographic analysis informs the turnout model; the turnout model informs the forecast; the finance analysis contextualizes the media analysis; the equity audit runs as a thread rather than appearing only in its dedicated section. Conclusions explicitly trace back to evidence developed in preceding sections.
3–4 Most sections are connected, with explicit cross-references in at least three places. The conclusions section ties together the major findings but some sections feel isolated from the rest.
1–2 Sections are largely independent. The document reads as a series of discrete exercises sharing a common topic.
0 No integration across sections.

8b. Analytical Voice and Precision (3 points)

Score Description
3 The writing demonstrates the "authoritative-approachable" tone of rigorous political analysis: claims are specific, evidence is cited, uncertainty is hedged precisely rather than vaguely, and the prose does not overclaim or underclaim. The student writes through the data rather than about it.
2 Writing is generally competent and analytical but occasionally lapses into description rather than analysis, or into overclaiming beyond what evidence supports.
1 Writing is primarily descriptive. Claims are made without evidence or with vague hedging ("it seems that" without quantitative support).
0 Writing is unclear or primarily reports facts without analysis.

8c. Citation and Reference Standards (2 points)

Score Description
2 All polls, data sources, news articles, and analytical methods are properly cited in a consistent format. FEC filing references include committee name and report date. Polling citations include pollster, sponsor, field dates, and sample size.
1 Most sources are cited but citation format is inconsistent or incomplete for some poll or data references.
0 Sources are not cited or are cited in a way that does not allow the reader to locate the original materials.

Score Summary

Dimension Points Possible
1. Polling Analysis 22
2. Demographic and Electoral Geography 18
3. Turnout Modeling 16
4. Media and Advertising Analysis 12
5. Campaign Finance Analysis 10
6. Forecasting 12
7. Equity and Representation Audit 10
8. Writing Quality and Integration 10
Total 110

Grade Conversion (Suggested)

Percent of Total Grade
93–100% A
90–92% A-
87–89% B+
83–86% B
80–82% B-
77–79% C+
73–76% C
70–72% C-
Below 70% D/F (instructor discretion)

Instructor Notes

Strong work at the A level typically exhibits one or more of the following characteristics not captured in the rubric:

  • The student's analysis arrives at a conclusion that differs from the capstone document's ODA analysis in a well-reasoned way, demonstrating genuine independent analytical judgment.
  • The student identifies an implication or pattern in the data that the capstone document did not develop — a house effect pattern, a campaign finance anomaly, a media framing disparity — and pursues it with additional evidence.
  • The equity audit goes beyond the minimum by seeking out community-level or organizational-level perspectives on voter access that are not contained in the capstone document's data.
  • The writing is sufficiently clear and precise that a non-specialist reader could follow the analysis from beginning to conclusion without getting lost.

Scores below 70 percent typically reflect one or more of the following: missing major sections (forecasting, equity audit), a polling average that is computed incorrectly or not at all, a turnout model that is purely qualitative, or a document that describes data rather than analyzes it.

Instructors may adjust grade conversion thresholds for their course level and expectations. The rubric assumes a graduate-level or advanced undergraduate course context.