Appendix B: Statistical Tables

This appendix provides essential statistical reference tables for hypothesis testing, confidence interval construction, and probability calculations commonly used in basketball analytics.


B.1 Standard Normal Distribution (Z-Table)

The standard normal distribution table provides the cumulative probability P(Z <= z) for the standard normal random variable Z with mean 0 and standard deviation 1. Use this table for large-sample hypothesis tests and confidence intervals.

How to Read This Table

The z-value is split into two parts: the row indicates the ones and tenths digits, while the column indicates the hundredths digit. For example, to find P(Z <= 1.96), locate row 1.9 and column 0.06 to get 0.9750.

Cumulative Probabilities P(Z <= z)

z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
3.1 0.9990 0.9991 0.9991 0.9991 0.9992 0.9992 0.9992 0.9992 0.9993 0.9993
3.2 0.9993 0.9993 0.9994 0.9994 0.9994 0.9994 0.9994 0.9995 0.9995 0.9995
3.3 0.9995 0.9995 0.9995 0.9996 0.9996 0.9996 0.9996 0.9996 0.9996 0.9997
3.4 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9998

Negative Z-Values

For negative z-values, use the symmetry property: P(Z <= -z) = 1 - P(Z <= z)

For example: P(Z <= -1.96) = 1 - 0.9750 = 0.0250
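
For readers working in Python, the table lookups and the symmetry rule can be reproduced with scipy.stats (referenced at the end of this appendix); a minimal sketch:

```python
from scipy.stats import norm

# Cumulative probability P(Z <= 1.96): row 1.9, column 0.06 of the table
print(norm.cdf(1.96))    # ~0.9750

# Symmetry for negative z-values: P(Z <= -1.96) = 1 - P(Z <= 1.96)
print(norm.cdf(-1.96))   # ~0.0250

# Inverse lookup: the z-value with 97.5% of the distribution below it
print(norm.ppf(0.975))   # ~1.96
```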

Common Critical Values for Two-Tailed Tests

Confidence Level Alpha (two-tailed) z-critical
90% 0.10 1.645
95% 0.05 1.960
99% 0.01 2.576
99.9% 0.001 3.291

Basketball Application Example: Testing if a player's three-point percentage (38.5% on 200 attempts) differs significantly from the league average (35.0%). Use z = (0.385 - 0.350) / sqrt(0.35 * 0.65 / 200) = 1.04, which is less than 1.96, so we do not reject the null hypothesis at alpha = 0.05.
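
A minimal sketch of that calculation in Python, assuming the standard large-sample one-sample proportion z-test (the numbers are those from the example):

```python
from math import sqrt
from scipy.stats import norm

p_hat, p0, n = 0.385, 0.350, 200           # sample proportion, null value, attempts
se = sqrt(p0 * (1 - p0) / n)               # standard error under H0
z = (p_hat - p0) / se                      # test statistic, ~1.04
p_value = 2 * (1 - norm.cdf(abs(z)))       # two-tailed p-value, ~0.30

print(f"z = {z:.2f}, p-value = {p_value:.3f}")  # fail to reject H0 at alpha = 0.05
```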


B.2 Student's t-Distribution Critical Values

The t-distribution is used when the population standard deviation is unknown and must be estimated from the sample. This table provides critical values for various degrees of freedom (df) and significance levels.

Two-Tailed Critical Values

df t(0.10) t(0.05) t(0.025) t(0.01) t(0.005)
1 6.314 12.706 25.452 63.657 127.321
2 2.920 4.303 6.205 9.925 14.089
3 2.353 3.182 4.177 5.841 7.453
4 2.132 2.776 3.495 4.604 5.598
5 2.015 2.571 3.163 4.032 4.773
6 1.943 2.447 2.969 3.707 4.317
7 1.895 2.365 2.841 3.499 4.029
8 1.860 2.306 2.752 3.355 3.833
9 1.833 2.262 2.685 3.250 3.690
10 1.812 2.228 2.634 3.169 3.581
11 1.796 2.201 2.593 3.106 3.497
12 1.782 2.179 2.560 3.055 3.428
13 1.771 2.160 2.533 3.012 3.372
14 1.761 2.145 2.510 2.977 3.326
15 1.753 2.131 2.490 2.947 3.286
16 1.746 2.120 2.473 2.921 3.252
17 1.740 2.110 2.458 2.898 3.222
18 1.734 2.101 2.445 2.878 3.197
19 1.729 2.093 2.433 2.861 3.174
20 1.725 2.086 2.423 2.845 3.153
21 1.721 2.080 2.414 2.831 3.135
22 1.717 2.074 2.405 2.819 3.119
23 1.714 2.069 2.398 2.807 3.104
24 1.711 2.064 2.391 2.797 3.091
25 1.708 2.060 2.385 2.787 3.078
26 1.706 2.056 2.379 2.779 3.067
27 1.703 2.052 2.373 2.771 3.057
28 1.701 2.048 2.368 2.763 3.047
29 1.699 2.045 2.364 2.756 3.038
30 1.697 2.042 2.360 2.750 3.030
35 1.690 2.030 2.342 2.724 2.996
40 1.684 2.021 2.329 2.704 2.971
45 1.679 2.014 2.319 2.690 2.952
50 1.676 2.009 2.311 2.678 2.937
60 1.671 2.000 2.299 2.660 2.915
70 1.667 1.994 2.291 2.648 2.899
80 1.664 1.990 2.284 2.639 2.887
90 1.662 1.987 2.280 2.632 2.878
100 1.660 1.984 2.276 2.626 2.871
120 1.658 1.980 2.270 2.617 2.860
inf 1.645 1.960 2.241 2.576 2.807

One-Tailed Critical Values

The column labels above are two-tailed significance levels, so the corresponding one-tailed level is the column label divided by 2. For a one-tailed test, use the column whose label is twice your alpha; for example, for a one-tailed test at alpha = 0.05, use the t(0.10) column.

Basketball Application Example: Testing if the mean points per game for a sample of 25 games (mean = 112.4, s = 8.5) differs from a hypothesized mean of 108.0. With df = 24, t = (112.4 - 108.0) / (8.5 / sqrt(25)) = 2.59. Since 2.59 > 2.064 (t-critical at alpha = 0.05), we reject the null hypothesis.
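
The same one-sample t-test can be run from summary statistics with scipy.stats; a minimal sketch using the numbers above:

```python
from math import sqrt
from scipy.stats import t

xbar, mu0, s, n = 112.4, 108.0, 8.5, 25    # sample mean, hypothesized mean, sample SD, games
df = n - 1
t_stat = (xbar - mu0) / (s / sqrt(n))      # ~2.59
t_crit = t.ppf(1 - 0.05 / 2, df)           # two-tailed critical value at alpha = 0.05, ~2.064
p_value = 2 * t.sf(abs(t_stat), df)        # two-tailed p-value, ~0.016

print(f"t = {t_stat:.2f}, t-critical = {t_crit:.3f}, p = {p_value:.4f}")
```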


B.3 Chi-Square Distribution Critical Values

The chi-square distribution is used for goodness-of-fit tests, tests of independence, and variance tests. This table provides critical values for the right-tail probability.

Right-Tail Critical Values

df X^2(0.995) X^2(0.99) X^2(0.975) X^2(0.95) X^2(0.90) X^2(0.10) X^2(0.05) X^2(0.025) X^2(0.01) X^2(0.005)
1 0.000 0.000 0.001 0.004 0.016 2.706 3.841 5.024 6.635 7.879
2 0.010 0.020 0.051 0.103 0.211 4.605 5.991 7.378 9.210 10.597
3 0.072 0.115 0.216 0.352 0.584 6.251 7.815 9.348 11.345 12.838
4 0.207 0.297 0.484 0.711 1.064 7.779 9.488 11.143 13.277 14.860
5 0.412 0.554 0.831 1.145 1.610 9.236 11.070 12.833 15.086 16.750
6 0.676 0.872 1.237 1.635 2.204 10.645 12.592 14.449 16.812 18.548
7 0.989 1.239 1.690 2.167 2.833 12.017 14.067 16.013 18.475 20.278
8 1.344 1.646 2.180 2.733 3.490 13.362 15.507 17.535 20.090 21.955
9 1.735 2.088 2.700 3.325 4.168 14.684 16.919 19.023 21.666 23.589
10 2.156 2.558 3.247 3.940 4.865 15.987 18.307 20.483 23.209 25.188
11 2.603 3.053 3.816 4.575 5.578 17.275 19.675 21.920 24.725 26.757
12 3.074 3.571 4.404 5.226 6.304 18.549 21.026 23.337 26.217 28.300
13 3.565 4.107 5.009 5.892 7.042 19.812 22.362 24.736 27.688 29.819
14 4.075 4.660 5.629 6.571 7.790 21.064 23.685 26.119 29.141 31.319
15 4.601 5.229 6.262 7.261 8.547 22.307 24.996 27.488 30.578 32.801
16 5.142 5.812 6.908 7.962 9.312 23.542 26.296 28.845 32.000 34.267
17 5.697 6.408 7.564 8.672 10.085 24.769 27.587 30.191 33.409 35.718
18 6.265 7.015 8.231 9.390 10.865 25.989 28.869 31.526 34.805 37.156
19 6.844 7.633 8.907 10.117 11.651 27.204 30.144 32.852 36.191 38.582
20 7.434 8.260 9.591 10.851 12.443 28.412 31.410 34.170 37.566 39.997
21 8.034 8.897 10.283 11.591 13.240 29.615 32.671 35.479 38.932 41.401
22 8.643 9.542 10.982 12.338 14.041 30.813 33.924 36.781 40.289 42.796
23 9.260 10.196 11.689 13.091 14.848 32.007 35.172 38.076 41.638 44.181
24 9.886 10.856 12.401 13.848 15.659 33.196 36.415 39.364 42.980 45.559
25 10.520 11.524 13.120 14.611 16.473 34.382 37.652 40.646 44.314 46.928
26 11.160 12.198 13.844 15.379 17.292 35.563 38.885 41.923 45.642 48.290
27 11.808 12.879 14.573 16.151 18.114 36.741 40.113 43.195 46.963 49.645
28 12.461 13.565 15.308 16.928 18.939 37.916 41.337 44.461 48.278 50.993
29 13.121 14.256 16.047 17.708 19.768 39.087 42.557 45.722 49.588 52.336
30 13.787 14.953 16.791 18.493 20.599 40.256 43.773 46.979 50.892 53.672
40 20.707 22.164 24.433 26.509 29.051 51.805 55.758 59.342 63.691 66.766
50 27.991 29.707 32.357 34.764 37.689 63.167 67.505 71.420 76.154 79.490
60 35.534 37.485 40.482 43.188 46.459 74.397 79.082 83.298 88.379 91.952
70 43.275 45.442 48.758 51.739 55.329 85.527 90.531 95.023 100.425 104.215
80 51.172 53.540 57.153 60.391 64.278 96.578 101.879 106.629 112.329 116.321
90 59.196 61.754 65.647 69.126 73.291 107.565 113.145 118.136 124.116 128.299
100 67.328 70.065 74.222 77.929 82.358 118.498 124.342 129.561 135.807 140.169

Basketball Application Example: Testing if shot distribution across 5 court zones follows the expected distribution. With df = 4, if the calculated chi-square statistic is 12.5 and the critical value at alpha = 0.05 is 9.488, we reject the null hypothesis of uniform distribution.
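
A sketch of the same test in Python: the first part reproduces the critical value and p-value for the statistic quoted above; the second shows how scipy.stats.chisquare computes the statistic from raw counts (the zone counts are hypothetical, chosen only for illustration):

```python
from scipy.stats import chi2, chisquare

# Critical value and p-value for the example above (chi-square = 12.5, df = 4)
print(chi2.ppf(0.95, df=4))                # ~9.488
print(chi2.sf(12.5, df=4))                 # ~0.014, so reject at alpha = 0.05

# From raw counts (hypothetical zone totals; observed and expected must sum to the same value)
observed = [55, 48, 60, 42, 45]            # shots taken in each of 5 court zones
expected = [50, 50, 50, 50, 50]            # counts expected under the null distribution
stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```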


B.4 F-Distribution Critical Values

The F-distribution is used in ANOVA and regression analysis. This table provides critical values at alpha = 0.05 for the most common degrees of freedom combinations; columns give the numerator degrees of freedom (df1) and rows give the denominator degrees of freedom (df2).

F-Critical Values at alpha = 0.05

df2\df1 1 2 3 4 5 6 8 10 12 15 20 30 60 120 inf
1 161.45 199.50 215.71 224.58 230.16 233.99 238.88 241.88 243.91 245.95 248.01 250.10 252.20 253.25 254.31
2 18.51 19.00 19.16 19.25 19.30 19.33 19.37 19.40 19.41 19.43 19.45 19.46 19.48 19.49 19.50
3 10.13 9.55 9.28 9.12 9.01 8.94 8.85 8.79 8.74 8.70 8.66 8.62 8.57 8.55 8.53
4 7.71 6.94 6.59 6.39 6.26 6.16 6.04 5.96 5.91 5.86 5.80 5.75 5.69 5.66 5.63
5 6.61 5.79 5.41 5.19 5.05 4.95 4.82 4.74 4.68 4.62 4.56 4.50 4.43 4.40 4.36
6 5.99 5.14 4.76 4.53 4.39 4.28 4.15 4.06 4.00 3.94 3.87 3.81 3.74 3.70 3.67
7 5.59 4.74 4.35 4.12 3.97 3.87 3.73 3.64 3.57 3.51 3.44 3.38 3.30 3.27 3.23
8 5.32 4.46 4.07 3.84 3.69 3.58 3.44 3.35 3.28 3.22 3.15 3.08 3.01 2.97 2.93
9 5.12 4.26 3.86 3.63 3.48 3.37 3.23 3.14 3.07 3.01 2.94 2.86 2.79 2.75 2.71
10 4.96 4.10 3.71 3.48 3.33 3.22 3.07 2.98 2.91 2.85 2.77 2.70 2.62 2.58 2.54
12 4.75 3.89 3.49 3.26 3.11 3.00 2.85 2.75 2.69 2.62 2.54 2.47 2.38 2.34 2.30
15 4.54 3.68 3.29 3.06 2.90 2.79 2.64 2.54 2.48 2.40 2.33 2.25 2.16 2.11 2.07
20 4.35 3.49 3.10 2.87 2.71 2.60 2.45 2.35 2.28 2.20 2.12 2.04 1.95 1.90 1.84
30 4.17 3.32 2.92 2.69 2.53 2.42 2.27 2.16 2.09 2.01 1.93 1.84 1.74 1.68 1.62
60 4.00 3.15 2.76 2.53 2.37 2.25 2.10 1.99 1.92 1.84 1.75 1.65 1.53 1.47 1.39
120 3.92 3.07 2.68 2.45 2.29 2.18 2.02 1.91 1.83 1.75 1.66 1.55 1.43 1.35 1.25
inf 3.84 3.00 2.60 2.37 2.21 2.10 1.94 1.83 1.75 1.67 1.57 1.46 1.32 1.22 1.00

Basketball Application Example: Testing if the mean points scored differ significantly across 4 quarters using one-way ANOVA with 20 games. With df1 = 3 and df2 = 76, the critical F-value at alpha = 0.05 is approximately 2.72.
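
A sketch in Python: the first line reproduces the critical value; the rest shows scipy.stats.f_oneway on hypothetical per-quarter scoring arrays (simulated here only to make the snippet runnable):

```python
import numpy as np
from scipy.stats import f, f_oneway

# Critical F-value for df1 = 3, df2 = 76 at alpha = 0.05
print(f.ppf(0.95, dfn=3, dfd=76))          # ~2.72

# One-way ANOVA on hypothetical points-per-quarter samples for 20 games
rng = np.random.default_rng(0)
q1, q2, q3, q4 = (rng.normal(27, 5, 20) for _ in range(4))
stat, p = f_oneway(q1, q2, q3, q4)
print(f"F = {stat:.2f}, p = {p:.3f}")
```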


B.5 Correlation Critical Values

Critical values for the Pearson correlation coefficient r to test H0: rho = 0, indexed by sample size n (degrees of freedom = n - 2).

Two-Tailed Critical Values

n r(0.10) r(0.05) r(0.02) r(0.01)
5 0.805 0.878 0.934 0.959
6 0.729 0.811 0.882 0.917
7 0.669 0.754 0.833 0.875
8 0.621 0.707 0.789 0.834
9 0.582 0.666 0.750 0.798
10 0.549 0.632 0.715 0.765
11 0.521 0.602 0.685 0.735
12 0.497 0.576 0.658 0.708
13 0.476 0.553 0.634 0.684
14 0.458 0.532 0.612 0.661
15 0.441 0.514 0.592 0.641
16 0.426 0.497 0.574 0.623
17 0.412 0.482 0.558 0.606
18 0.400 0.468 0.543 0.590
19 0.389 0.456 0.529 0.575
20 0.378 0.444 0.516 0.561
25 0.337 0.396 0.462 0.505
30 0.306 0.361 0.423 0.463
35 0.283 0.334 0.392 0.430
40 0.264 0.312 0.367 0.403
45 0.248 0.294 0.346 0.380
50 0.235 0.279 0.328 0.361
60 0.214 0.254 0.300 0.330
70 0.198 0.235 0.278 0.306
80 0.185 0.220 0.260 0.286
90 0.174 0.207 0.245 0.270
100 0.165 0.197 0.232 0.256
150 0.135 0.160 0.190 0.210
200 0.117 0.139 0.164 0.182
300 0.095 0.113 0.134 0.149
500 0.074 0.088 0.104 0.115

Basketball Application Example: With a sample of n = 30 games, testing the correlation between assists and team wins. If r = 0.42, the critical value at alpha = 0.05 is 0.361. Since 0.42 > 0.361, the correlation is statistically significant.
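
The critical values in this table come from the t-distribution with df = n - 2; a minimal sketch that reproduces the 0.361 cutoff used above (with raw data, scipy.stats.pearsonr returns r and its p-value directly):

```python
from math import sqrt
from scipy.stats import t

n, alpha = 30, 0.05
df = n - 2
t_crit = t.ppf(1 - alpha / 2, df)          # two-tailed t critical value
r_crit = t_crit / sqrt(t_crit**2 + df)     # convert to a critical correlation, ~0.361
print(f"critical r = {r_crit:.3f}")
```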


B.6 Binomial Probabilities

Selected binomial probabilities P(X = k) for common n and p values relevant to basketball analytics.

n = 10 (e.g., 10 free throw attempts)

k p=0.30 p=0.40 p=0.50 p=0.60 p=0.70 p=0.75 p=0.80 p=0.85 p=0.90
0 0.0282 0.0060 0.0010 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000
1 0.1211 0.0403 0.0098 0.0016 0.0001 0.0000 0.0000 0.0000 0.0000
2 0.2335 0.1209 0.0439 0.0106 0.0014 0.0004 0.0001 0.0000 0.0000
3 0.2668 0.2150 0.1172 0.0425 0.0090 0.0031 0.0008 0.0001 0.0000
4 0.2001 0.2508 0.2051 0.1115 0.0368 0.0162 0.0055 0.0012 0.0001
5 0.1029 0.2007 0.2461 0.2007 0.1029 0.0584 0.0264 0.0085 0.0015
6 0.0368 0.1115 0.2051 0.2508 0.2001 0.1460 0.0881 0.0401 0.0112
7 0.0090 0.0425 0.1172 0.2150 0.2668 0.2503 0.2013 0.1298 0.0574
8 0.0014 0.0106 0.0439 0.1209 0.2335 0.2816 0.3020 0.2759 0.1937
9 0.0001 0.0016 0.0098 0.0403 0.1211 0.1877 0.2684 0.3474 0.3874
10 0.0000 0.0001 0.0010 0.0060 0.0282 0.0563 0.1074 0.1969 0.3487

n = 20 (e.g., 20 field goal attempts)

k p=0.40 p=0.45 p=0.50 p=0.55 p=0.60
5 0.0746 0.0365 0.0148 0.0049 0.0013
6 0.1244 0.0746 0.0370 0.0150 0.0049
7 0.1659 0.1221 0.0739 0.0366 0.0146
8 0.1797 0.1623 0.1201 0.0727 0.0355
9 0.1597 0.1771 0.1602 0.1185 0.0710
10 0.1171 0.1593 0.1762 0.1593 0.1171
11 0.0710 0.1185 0.1602 0.1771 0.1597
12 0.0355 0.0727 0.1201 0.1623 0.1797
13 0.0146 0.0366 0.0739 0.1221 0.1659
14 0.0049 0.0150 0.0370 0.0746 0.1244
15 0.0013 0.0049 0.0148 0.0365 0.0746

Basketball Application Example: A player with an 80% free throw percentage attempts 10 free throws. The probability of making exactly 8 is 0.3020, and the probability of making at least 8 is 0.3020 + 0.2684 + 0.1074 = 0.6778.
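
The same probabilities via scipy.stats.binom, as a quick check on the table:

```python
from scipy.stats import binom

n, p = 10, 0.80                            # 10 free throws, 80% shooter
print(binom.pmf(8, n, p))                  # P(X = 8)  ~0.3020
print(binom.sf(7, n, p))                   # P(X >= 8) ~0.6778, since sf(7) = P(X > 7)
```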


B.7 Poisson Distribution Probabilities

The Poisson distribution models count data for events that occur at a constant average rate, and is useful for modeling per-game counts such as turnovers, steals, and blocks.

P(X = k) for various lambda values

k lambda=1 lambda=2 lambda=3 lambda=4 lambda=5 lambda=6 lambda=7 lambda=8
0 0.3679 0.1353 0.0498 0.0183 0.0067 0.0025 0.0009 0.0003
1 0.3679 0.2707 0.1494 0.0733 0.0337 0.0149 0.0064 0.0027
2 0.1839 0.2707 0.2240 0.1465 0.0842 0.0446 0.0223 0.0107
3 0.0613 0.1804 0.2240 0.1954 0.1404 0.0892 0.0521 0.0286
4 0.0153 0.0902 0.1680 0.1954 0.1755 0.1339 0.0912 0.0573
5 0.0031 0.0361 0.1008 0.1563 0.1755 0.1606 0.1277 0.0916
6 0.0005 0.0120 0.0504 0.1042 0.1462 0.1606 0.1490 0.1221
7 0.0001 0.0034 0.0216 0.0595 0.1044 0.1377 0.1490 0.1396
8 0.0000 0.0009 0.0081 0.0298 0.0653 0.1033 0.1304 0.1396
9 0.0000 0.0002 0.0027 0.0132 0.0363 0.0688 0.1014 0.1241
10 0.0000 0.0000 0.0008 0.0053 0.0181 0.0413 0.0710 0.0993
11 0.0000 0.0000 0.0002 0.0019 0.0082 0.0225 0.0452 0.0722
12 0.0000 0.0000 0.0001 0.0006 0.0034 0.0113 0.0264 0.0481

Basketball Application Example: If a team averages 3 blocks per game (lambda = 3), the probability of getting exactly 5 blocks is 0.1008, and the probability of getting 5 or more blocks is approximately 0.1847.
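
The same quantities via scipy.stats.poisson:

```python
from scipy.stats import poisson

lam = 3                                    # average blocks per game
print(poisson.pmf(5, lam))                 # P(X = 5)  ~0.1008
print(poisson.sf(4, lam))                  # P(X >= 5) ~0.1847, since sf(4) = P(X > 4)
```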


B.8 Quick Reference: Common Hypothesis Testing Scenarios

One-Sample Tests

Test Purpose Distribution Test Statistic When to Use
Mean (known sigma) Z z = (xbar - mu0) / (sigma / sqrt(n)) Known population SD (or large sample)
Mean (unknown sigma) t t = (xbar - mu0) / (s / sqrt(n)) Unknown population SD, estimated by s
Proportion Z z = (phat - p0) / sqrt(p0(1-p0)/n) np0 >= 10 and n(1-p0) >= 10
Variance Chi-square chi2 = (n-1)s^2 / sigma0^2 Normal population

Two-Sample Tests

Test Purpose Distribution Degrees of Freedom
Two means (independent, equal variance) t n1 + n2 - 2
Two means (independent, unequal variance) t Welch approximation
Two proportions Z N/A (use normal approximation)
Two variances F df1 = n1 - 1, df2 = n2 - 1
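
As an illustration of the two-means rows, scipy.stats.ttest_ind runs both the pooled and the Welch (unequal-variance) versions; a minimal sketch on hypothetical home/away scoring samples:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical points-per-game samples for two conditions (e.g., home vs. away)
rng = np.random.default_rng(1)
home = rng.normal(112, 10, 30)
away = rng.normal(108, 12, 30)

pooled = ttest_ind(home, away)                   # assumes equal variances, df = n1 + n2 - 2
welch = ttest_ind(home, away, equal_var=False)   # Welch approximation for unequal variances
print(pooled.statistic, pooled.pvalue)
print(welch.statistic, welch.pvalue)
```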

ANOVA

Test Distribution Degrees of Freedom
One-way ANOVA F df1 = k - 1, df2 = N - k
Two-way ANOVA (Factor A) F df1 = a - 1, df2 = N - ab
Two-way ANOVA (Factor B) F df1 = b - 1, df2 = N - ab
Interaction F df1 = (a-1)(b-1), df2 = N - ab

B.9 Sample Size Determination

Sample Size for Estimating a Mean

$$n = \left(\frac{z_{\alpha/2} \cdot \sigma}{E}\right)^2$$

where E is the margin of error.

Confidence Level z-value For sigma=10, E=2 For sigma=10, E=1
90% 1.645 68 271
95% 1.960 97 385
99% 2.576 166 664

Sample Size for Estimating a Proportion

$$n = \frac{z_{\alpha/2}^2 \cdot p(1-p)}{E^2}$$

For maximum variance (p = 0.5):

Confidence Level E = 0.05 E = 0.03 E = 0.01
90% 271 752 6,766
95% 385 1,068 9,604
99% 666 1,849 16,641

Basketball Application Example: To estimate a player's true three-point percentage within 3 percentage points with 95% confidence, you need approximately n = (1.96)^2 * (0.35)(0.65) / (0.03)^2 = 971 attempts.
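
Both sample-size formulas are easy to wrap as small helpers; a sketch (the function names are just illustrative):

```python
from math import ceil
from scipy.stats import norm

def n_for_mean(sigma, E, conf=0.95):
    """Sample size to estimate a mean to within margin of error E."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil((z * sigma / E) ** 2)

def n_for_proportion(p, E, conf=0.95):
    """Sample size to estimate a proportion to within margin of error E."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil(z**2 * p * (1 - p) / E**2)

print(n_for_mean(sigma=10, E=2))           # 97, matching the table
print(n_for_proportion(p=0.35, E=0.03))    # 972 (the ~971 from the example, rounded up)
```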


B.10 Effect Size Reference Tables

Cohen's d Interpretation

Effect Size Cohen's d Interpretation
Small 0.2 Difference is subtle
Medium 0.5 Difference is noticeable
Large 0.8 Difference is obvious
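
Cohen's d for two independent samples is the mean difference divided by the pooled standard deviation; a minimal sketch with hypothetical lineup data:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Hypothetical points-per-game samples under two lineups
rng = np.random.default_rng(2)
lineup_a = rng.normal(110, 8, 40)
lineup_b = rng.normal(106, 8, 40)
print(round(cohens_d(lineup_a, lineup_b), 2))   # roughly 0.5: a "medium" effect by the table above
```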

Correlation Coefficient Interpretation

Correlation r Interpretation
Negligible 0.0 - 0.1 No practical relationship
Weak 0.1 - 0.3 Small relationship
Moderate 0.3 - 0.5 Medium relationship
Strong 0.5 - 0.7 Large relationship
Very Strong 0.7 - 0.9 Very large relationship
Near Perfect 0.9 - 1.0 Near-deterministic relationship

R-squared Interpretation for Regression

R-squared Interpretation in Social Sciences
0.01 Weak
0.09 Moderate
0.25 Substantial

Note: In basketball analytics, R-squared values tend to be lower due to high game-to-game variance. An R-squared of 0.15-0.25 for single-game predictions may be considered reasonable.


B.11 Power Analysis Reference

Statistical power is the probability of correctly rejecting a false null hypothesis. This table shows the required sample size per group for various power levels when detecting small, medium, and large effect sizes (d = 0.2, 0.5, 0.8) with alpha = 0.05 (two-tailed).

Sample Size per Group for Two-Sample t-Test

Power d = 0.2 d = 0.5 d = 0.8
0.70 310 50 20
0.80 393 64 26
0.85 458 74 30
0.90 542 88 36
0.95 651 105 42

Basketball Application Example: To detect a medium effect (d = 0.5, a difference of about half the game-to-game standard deviation in points per game) with 80% power, you need approximately 64 games in each group being compared.
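
A sketch of the underlying calculation using the common normal-approximation formula n ≈ 2 * (z_alpha/2 + z_power)^2 / d^2 per group; exact noncentral-t calculations (e.g., statsmodels' TTestIndPower) give slightly larger values, such as the 64 in the table:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(d, power=0.80, alpha=0.05):
    """Approximate per-group sample size for a two-sample t-test (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

for d in (0.2, 0.5, 0.8):
    print(d, n_per_group(d))               # ~393, 63, 25 -- slightly below the exact table values
```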


This appendix provides reference values for common statistical procedures. For calculations beyond these tables, use statistical software such as Python's scipy.stats module or R's built-in distribution functions.