Appendix B: Statistical Tables
This appendix contains the core reference tables needed to perform and interpret the statistical analyses discussed throughout the textbook. Each table is preceded by a brief explanation of when and how to use it. For conceptual explanations of the underlying statistics, see Appendix A.
B.1 Standard Normal (Z) Distribution Table
When to use: When you know a z-score (a value standardized to have mean 0 and standard deviation 1) and need the cumulative probability P(Z ≤ z) — that is, the area under the normal curve to the LEFT of z. To find the area to the RIGHT, subtract the table value from 1.00.
How to read: In a full z-table, the row gives the ones and tenths digits of z and the column gives the hundredths digit. The abridged table below instead lists z and P(Z ≤ z) side by side, at selected z-values in increments of 0.10, for clarity.
| z | P(Z ≤ z) | z | P(Z ≤ z) | z | P(Z ≤ z) |
|---|---|---|---|---|---|
| −3.50 | 0.0002 | −1.10 | 0.1357 | +1.30 | 0.9032 |
| −3.40 | 0.0003 | −1.00 | 0.1587 | +1.40 | 0.9192 |
| −3.30 | 0.0005 | −0.90 | 0.1841 | +1.50 | 0.9332 |
| −3.20 | 0.0007 | −0.80 | 0.2119 | +1.60 | 0.9452 |
| −3.10 | 0.0010 | −0.70 | 0.2420 | +1.70 | 0.9554 |
| −3.00 | 0.0013 | −0.60 | 0.2743 | +1.80 | 0.9641 |
| −2.90 | 0.0019 | −0.50 | 0.3085 | +1.90 | 0.9713 |
| −2.80 | 0.0026 | −0.40 | 0.3446 | +2.00 | 0.9772 |
| −2.70 | 0.0035 | −0.30 | 0.3821 | +2.10 | 0.9821 |
| −2.60 | 0.0047 | −0.20 | 0.4207 | +2.20 | 0.9861 |
| −2.50 | 0.0062 | −0.10 | 0.4602 | +2.30 | 0.9893 |
| −2.40 | 0.0082 | 0.00 | 0.5000 | +2.40 | 0.9918 |
| −2.30 | 0.0107 | +0.10 | 0.5398 | +2.50 | 0.9938 |
| −2.20 | 0.0139 | +0.20 | 0.5793 | +2.60 | 0.9953 |
| −2.10 | 0.0179 | +0.30 | 0.6179 | +2.70 | 0.9965 |
| −2.00 | 0.0228 | +0.40 | 0.6554 | +2.80 | 0.9974 |
| −1.90 | 0.0287 | +0.50 | 0.6915 | +2.90 | 0.9981 |
| −1.80 | 0.0359 | +0.60 | 0.7257 | +3.00 | 0.9987 |
| −1.70 | 0.0446 | +0.70 | 0.7580 | +3.10 | 0.9990 |
| −1.60 | 0.0548 | +0.80 | 0.7881 | +3.20 | 0.9993 |
| −1.50 | 0.0668 | +0.90 | 0.8159 | +3.30 | 0.9995 |
| −1.40 | 0.0808 | +1.00 | 0.8413 | +3.40 | 0.9997 |
| −1.30 | 0.0968 | +1.10 | 0.8643 | +3.50 | 0.9998 |
| −1.20 | 0.1151 | +1.20 | 0.8849 | | |
Key critical values to memorize:

- z = ±1.645: two-tailed α = 0.10 (one-tailed α = 0.05)
- z = ±1.960: two-tailed α = 0.05 (one-tailed α = 0.025)
- z = ±2.326: two-tailed α = 0.02 (one-tailed α = 0.01)
- z = ±2.576: two-tailed α = 0.01 (one-tailed α = 0.005)
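Table B.1 can also be checked programmatically. Here is a minimal sketch using only the Python standard library, via the closed-form identity P(Z ≤ z) = ½(1 + erf(z/√2)):

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Cumulative probability P(Z <= z) for the standard normal."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Reproduce a few entries from Table B.1:
print(round(phi(-1.00), 4))     # 0.1587
print(round(phi(1.96), 4))      # 0.975
print(round(1 - phi(2.00), 4))  # area to the RIGHT of z = 2.00 -> 0.0228
```

This is handy for z-values that fall between the table's 0.10 increments.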
B.2 Critical t-Values Table
When to use: The t-distribution is used instead of the z-distribution when the population standard deviation is unknown and must be estimated from sample data. As sample size n increases, degrees of freedom df = n − 1 increase, and the t-distribution approaches the standard normal. Use this table to find the critical value tα,df such that P(T > tα,df) = α.
One-tailed vs. two-tailed: For a two-tailed test at α = 0.05, use the column labeled "Two-tail 0.05" (which is the same as one-tail 0.025).
| df | One-tail 0.10 / Two-tail 0.20 | One-tail 0.05 / Two-tail 0.10 | One-tail 0.025 / Two-tail 0.05 | One-tail 0.01 / Two-tail 0.02 | One-tail 0.005 / Two-tail 0.01 |
|---|---|---|---|---|---|
| 1 | 3.078 | 6.314 | 12.706 | 31.821 | 63.657 |
| 2 | 1.886 | 2.920 | 4.303 | 6.965 | 9.925 |
| 3 | 1.638 | 2.353 | 3.182 | 4.541 | 5.841 |
| 4 | 1.533 | 2.132 | 2.776 | 3.747 | 4.604 |
| 5 | 1.476 | 2.015 | 2.571 | 3.365 | 4.032 |
| 6 | 1.440 | 1.943 | 2.447 | 3.143 | 3.707 |
| 7 | 1.415 | 1.895 | 2.365 | 2.998 | 3.499 |
| 8 | 1.397 | 1.860 | 2.306 | 2.896 | 3.355 |
| 9 | 1.383 | 1.833 | 2.262 | 2.821 | 3.250 |
| 10 | 1.372 | 1.812 | 2.228 | 2.764 | 3.169 |
| 11 | 1.363 | 1.796 | 2.201 | 2.718 | 3.106 |
| 12 | 1.356 | 1.782 | 2.179 | 2.681 | 3.055 |
| 13 | 1.350 | 1.771 | 2.160 | 2.650 | 3.012 |
| 14 | 1.345 | 1.761 | 2.145 | 2.624 | 2.977 |
| 15 | 1.341 | 1.753 | 2.131 | 2.602 | 2.947 |
| 16 | 1.337 | 1.746 | 2.120 | 2.583 | 2.921 |
| 17 | 1.333 | 1.740 | 2.110 | 2.567 | 2.898 |
| 18 | 1.330 | 1.734 | 2.101 | 2.552 | 2.878 |
| 19 | 1.328 | 1.729 | 2.093 | 2.539 | 2.861 |
| 20 | 1.325 | 1.725 | 2.086 | 2.528 | 2.845 |
| 21 | 1.323 | 1.721 | 2.080 | 2.518 | 2.831 |
| 22 | 1.321 | 1.717 | 2.074 | 2.508 | 2.819 |
| 23 | 1.319 | 1.714 | 2.069 | 2.500 | 2.807 |
| 24 | 1.318 | 1.711 | 2.064 | 2.492 | 2.797 |
| 25 | 1.316 | 1.708 | 2.060 | 2.485 | 2.787 |
| 26 | 1.315 | 1.706 | 2.056 | 2.479 | 2.779 |
| 27 | 1.314 | 1.703 | 2.052 | 2.473 | 2.771 |
| 28 | 1.313 | 1.701 | 2.048 | 2.467 | 2.763 |
| 29 | 1.311 | 1.699 | 2.045 | 2.462 | 2.756 |
| 30 | 1.310 | 1.697 | 2.042 | 2.457 | 2.750 |
| 40 | 1.303 | 1.684 | 2.021 | 2.423 | 2.704 |
| 60 | 1.296 | 1.671 | 2.000 | 2.390 | 2.660 |
| 120 | 1.289 | 1.658 | 1.980 | 2.358 | 2.617 |
| ∞ | 1.282 | 1.645 | 1.960 | 2.326 | 2.576 |
Note: At df = ∞, the t-distribution equals the standard normal distribution, so the critical values match those from Table B.1.
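Assuming SciPy is available, the critical values in Table B.2 can be reproduced with `scipy.stats.t.ppf`. For a two-tailed test, place α/2 in each tail, so the α = 0.05 critical value is the 0.975 quantile:

```python
from scipy.stats import t

# Two-tailed test at alpha = 0.05 with df = 20 -> 0.975 quantile:
print(round(t.ppf(0.975, df=20), 3))  # 2.086 (matches Table B.2)

# One-tailed test at alpha = 0.05 with df = 10:
print(round(t.ppf(0.95, df=10), 3))   # 1.812

# As df grows, t converges to the normal critical value 1.960:
print(round(t.ppf(0.975, df=10_000), 3))
```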
B.3 Chi-Square (χ²) Critical Values Table
When to use: The chi-square test is used for categorical data — most commonly to test (a) goodness-of-fit (does an observed frequency distribution match an expected one?) or (b) independence (are two categorical variables associated?). Degrees of freedom for independence tests = (rows − 1)(columns − 1).
Misinformation applications: Testing whether misinformation sharing varies significantly across age groups (independence test); testing whether the distribution of false claim types matches a theoretical model (goodness-of-fit).
The table gives critical values χ²α,df such that P(χ² > χ²α,df) = α.
| df | α = 0.10 | α = 0.05 | α = 0.025 | α = 0.01 | α = 0.001 |
|---|---|---|---|---|---|
| 1 | 2.706 | 3.841 | 5.024 | 6.635 | 10.828 |
| 2 | 4.605 | 5.991 | 7.378 | 9.210 | 13.816 |
| 3 | 6.251 | 7.815 | 9.348 | 11.345 | 16.266 |
| 4 | 7.779 | 9.488 | 11.143 | 13.277 | 18.467 |
| 5 | 9.236 | 11.070 | 12.832 | 15.086 | 20.515 |
| 6 | 10.645 | 12.592 | 14.449 | 16.812 | 22.458 |
| 7 | 12.017 | 14.067 | 16.013 | 18.475 | 24.322 |
| 8 | 13.362 | 15.507 | 17.535 | 20.090 | 26.124 |
| 9 | 14.684 | 16.919 | 19.023 | 21.666 | 27.877 |
| 10 | 15.987 | 18.307 | 20.483 | 23.209 | 29.588 |
| 11 | 17.275 | 19.675 | 21.920 | 24.725 | 31.264 |
| 12 | 18.549 | 21.026 | 23.337 | 26.217 | 32.910 |
| 13 | 19.812 | 22.362 | 24.736 | 27.688 | 34.528 |
| 14 | 21.064 | 23.685 | 26.119 | 29.141 | 36.123 |
| 15 | 22.307 | 24.996 | 27.488 | 30.578 | 37.697 |
| 16 | 23.542 | 26.296 | 28.845 | 32.000 | 39.252 |
| 17 | 24.769 | 27.587 | 30.191 | 33.409 | 40.790 |
| 18 | 25.989 | 28.869 | 31.526 | 34.805 | 42.312 |
| 19 | 27.204 | 30.144 | 32.852 | 36.191 | 43.820 |
| 20 | 28.412 | 31.410 | 34.170 | 37.566 | 45.315 |
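Both uses of the chi-square test can be sketched in a few lines, assuming SciPy and NumPy are available. The contingency counts below are invented purely for illustration:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Critical value for alpha = 0.05, df = 3, as in Table B.3:
print(round(chi2.ppf(0.95, df=3), 3))  # 7.815

# Hypothetical 2x3 table: misinformation sharing (rows) x age group (cols).
observed = np.array([[30, 45, 60],
                     [70, 55, 40]])
stat, p, dof, expected = chi2_contingency(observed)
print(dof)  # (2 - 1) * (3 - 1) = 2
print(round(stat, 2), round(p, 4))
```

Note that `chi2.ppf(0.95, ...)` gives the value with 0.05 of the distribution to its right, matching the table's right-tail convention.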
B.4 F-Distribution Critical Values (α = 0.05)
When to use: The F-test compares variances or is used in ANOVA (Analysis of Variance) to test whether means differ across three or more groups. The F-statistic has two degrees of freedom parameters: df₁ (numerator, related to the number of groups) and df₂ (denominator, related to the total sample size).
Misinformation application: Comparing mean belief in misinformation scores across three or more experimental conditions (e.g., control, mild inoculation, strong inoculation).
Critical values Fα=0.05 (df₁, df₂) such that P(F > Fcrit) = 0.05:
| df₂ \ df₁ | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 10 | 4.965 | 4.103 | 3.708 | 3.478 | 3.326 | 3.217 |
| 12 | 4.747 | 3.885 | 3.490 | 3.259 | 3.106 | 2.996 |
| 15 | 4.543 | 3.682 | 3.287 | 3.056 | 2.901 | 2.790 |
| 20 | 4.351 | 3.493 | 3.098 | 2.866 | 2.711 | 2.599 |
| 24 | 4.260 | 3.403 | 3.009 | 2.776 | 2.621 | 2.508 |
| 30 | 4.171 | 3.316 | 2.922 | 2.690 | 2.534 | 2.421 |
| 40 | 4.085 | 3.232 | 2.839 | 2.606 | 2.449 | 2.336 |
| 60 | 4.001 | 3.150 | 2.758 | 2.525 | 2.368 | 2.254 |
| 120 | 3.920 | 3.072 | 2.680 | 2.447 | 2.290 | 2.175 |
| ∞ | 3.841 | 2.996 | 2.605 | 2.372 | 2.214 | 2.099 |
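Assuming SciPy, the table entries come from `scipy.stats.f.ppf`, and the one-way ANOVA itself runs via `scipy.stats.f_oneway`. The condition scores below are invented for illustration:

```python
from scipy.stats import f, f_oneway

# Critical value F_0.05(df1 = 3, df2 = 20), as in Table B.4:
print(round(f.ppf(0.95, dfn=3, dfd=20), 3))  # 3.098

# Hypothetical belief-in-misinformation scores under three conditions:
control = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3]
mild    = [4.5, 4.2, 4.8, 4.4, 4.6, 4.3]
strong  = [3.9, 3.6, 4.1, 3.8, 4.0, 3.7]
stat, p = f_oneway(control, mild, strong)
print(round(stat, 2), round(p, 6))
```

If the computed F-statistic exceeds the tabled critical value for the relevant (df₁, df₂), reject the null hypothesis of equal means.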
B.5 Cohen's d Effect Size Interpretation Guide
Cohen's d measures the standardized difference between two group means. Use this guide to interpret computed d values in the context of intervention studies, experiments, and survey comparisons throughout the textbook.
| d value | Verbal label | Practical meaning |
|---|---|---|
| 0.00 – 0.19 | Negligible | Effects too small to be practically meaningful in most contexts |
| 0.20 – 0.49 | Small | Noticeable but modest; may be meaningful in policy contexts at scale |
| 0.50 – 0.79 | Medium | Visible to careful observers; typically the minimum threshold for practical significance |
| 0.80 – 1.19 | Large | Clearly visible effect; strong practical significance |
| 1.20 – 1.99 | Very large | Exceptional effect, unusual in behavioral research |
| ≥ 2.00 | Huge | Extremely rare in social/behavioral research; warrants scrutiny |
Benchmarks from misinformation research:

- Accuracy nudge interventions: d ≈ 0.10 – 0.25 (small)
- Inoculation (prebunking) interventions: d ≈ 0.40 – 0.60 (small to medium)
- Media literacy training (multi-session): d ≈ 0.30 – 0.50 (small to medium)
- Partisan identity effects on belief: d ≈ 0.70 – 1.00 (medium to large)
Hedges' g: Cohen's d is biased upward in small samples, so Hedges' g is preferred when group sizes are small. It applies a correction factor: g = d × (1 − 3/(4df − 1)), where df = n₁ + n₂ − 2. For sample sizes above 20 per group, d and g are nearly identical.
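A minimal sketch of both statistics, using only the standard library, with the pooled-standard-deviation form of d and the correction factor given above:

```python
from statistics import mean, variance

def cohens_d(a, b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

def hedges_g(a, b):
    """Small-sample bias correction: g = d * (1 - 3 / (4*df - 1))."""
    df = len(a) + len(b) - 2
    return cohens_d(a, b) * (1 - 3 / (4 * df - 1))

# Hypothetical post-intervention scores (invented for illustration):
treated = [3.2, 2.8, 3.5, 3.0, 2.9, 3.4]
control = [3.8, 4.1, 3.6, 4.0, 3.9, 4.2]
print(round(cohens_d(treated, control), 3), round(hedges_g(treated, control), 3))
```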
B.6 Common Correlation Coefficient Benchmarks
Pearson r measures the strength of linear association between two continuous variables. Spearman ρ (rho) measures monotonic association and is used for ordinal data or when the normality assumption is violated.
General Interpretation (Cohen, 1988)
| \|r\| range | Effect label | Common examples |
|---|---|---|
| 0.00 – 0.09 | Negligible | Background noise in most behavioral measures |
| 0.10 – 0.29 | Small | Typical effect of single items on attitudes |
| 0.30 – 0.49 | Medium | Educational and psychological interventions |
| 0.50 – 0.69 | Large | Strong predictors of behavior |
| 0.70 – 0.89 | Very large | Near-perfect instrument reliability; strong sociological predictors |
| 0.90 – 1.00 | Near-perfect | Physical measurement relationships; test-retest of stable traits |
Selected Correlations from Misinformation Literature
| Correlation | Approximate r | Source context |
|---|---|---|
| Analytical thinking × perceived accuracy of false headlines | −0.15 to −0.30 | Pennycook & Rand, 2019 |
| Exposure to misinformation × belief in misinformation | +0.20 to +0.40 | Various meta-analyses |
| Media literacy score × sharing of misinformation | −0.25 to −0.45 | Various |
| Partisan identity × acceptance of partisan misinformation | +0.40 to +0.60 | Leeper & Slothuus, 2014 |
| Repetition frequency × perceived truth (illusory truth) | +0.20 to +0.35 | Dechêne et al., 2010 |
| Trust in mainstream media × belief in conspiracy theories | −0.35 to −0.55 | Various |
Note on r²: The squared correlation (coefficient of determination) indicates the proportion of shared variance. An r = 0.30 means r² = 0.09 — only 9% of variance is explained. This is a useful reminder that even "medium" correlations leave most variance unexplained.
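Assuming SciPy, both coefficients (and r²) can be computed directly. The paired scores below are invented for illustration only:

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical paired observations per participant:
media_literacy  = [3, 5, 2, 8, 7, 4, 6, 9, 1, 5]
shares_per_week = [6, 4, 7, 1, 2, 5, 3, 1, 8, 4]

r, p_r = pearsonr(media_literacy, shares_per_week)      # linear association
rho, p_s = spearmanr(media_literacy, shares_per_week)   # monotonic association
print(round(r, 3), round(rho, 3))
print(round(r ** 2, 3))  # coefficient of determination (shared variance)
```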
B.7 Quick Reference: Which Test to Use
| Research question | Data type | Recommended test |
|---|---|---|
| Is the mean different from a known value? | Continuous | One-sample t-test |
| Do two group means differ? | Continuous | Independent-samples t-test |
| Do paired measurements differ? | Continuous | Paired-samples t-test |
| Do three+ group means differ? | Continuous | One-way ANOVA (F-test) |
| Are two categorical variables associated? | Categorical | Chi-square test of independence |
| Does observed frequency match expected? | Categorical | Chi-square goodness-of-fit |
| Is there a linear association? | Continuous | Pearson correlation |
| Is there a monotonic association? | Ordinal | Spearman correlation |
| Does a proportion equal a target value? | Binary | One-sample z-test for proportions |
| Do two proportions differ? | Binary | Two-sample z-test for proportions |
| Non-normal, comparing two groups | Any | Mann-Whitney U test |
| Non-normal, comparing three+ groups | Any | Kruskal-Wallis test |
B.8 Notes on Multiple Comparisons
When conducting multiple statistical tests simultaneously, the probability of obtaining at least one false positive (Type I error) increases beyond the nominal α level. This is called the multiple comparisons problem or familywise error rate inflation.
Bonferroni correction: To maintain a familywise error rate of α across k tests, use α/k as the significance threshold for each individual test. If conducting 10 tests at α = 0.05, use p < 0.005 as the threshold for each.
False Discovery Rate (FDR): The Benjamini-Hochberg procedure is less conservative than Bonferroni. It controls the expected proportion of false positives among all rejected null hypotheses. Recommended when testing many hypotheses simultaneously (e.g., in genomic or large-scale NLP studies).
In misinformation research, multiple comparisons frequently arise when testing the effect of an intervention across many subgroups (age, gender, political affiliation, education level). Researchers should pre-register their primary hypotheses and apply appropriate corrections for secondary analyses.
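Both corrections described above can be sketched in pure Python; the p-values below are invented for illustration:

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H0_i when p_i < alpha / k (controls the familywise error rate)."""
    k = len(pvals)
    return [p < alpha / k for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure (controls the false discovery rate).

    Sort p-values ascending, find the largest rank i with
    p_(i) <= (i / k) * alpha, and reject the hypotheses ranked 1..i.
    """
    k = len(pvals)
    order = sorted(range(k), key=lambda i: pvals[i])
    cutoff = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / k * alpha:
            cutoff = rank
    reject = [False] * k
    for rank, idx in enumerate(order, start=1):
        if rank <= cutoff:
            reject[idx] = True
    return reject

pvals = [0.001, 0.009, 0.017, 0.041, 0.20, 0.74]
print(bonferroni(pvals))          # [True, False, False, False, False, False]
print(benjamini_hochberg(pvals))  # [True, True, True, False, False, False]
```

With six tests, Bonferroni requires p < 0.05/6 ≈ 0.0083 and rejects only one hypothesis here, while the less conservative Benjamini-Hochberg procedure rejects three.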