Part IV: Bayesian and Temporal Data Science

"Bayesian methods do not require you to abandon what you know — they give you a formal framework for combining what you know with what you observe."


Why This Part Exists

Many of the hardest problems in applied data science share a common structure: you have prior knowledge, limited data, and you need to quantify uncertainty — not just make a point prediction.

A pharmaceutical company has clinical intuition about a drug's mechanism of action. A small observational study provides 200 patient records. A frequentist analysis says "not significant." A Bayesian analysis says "the posterior probability of a positive treatment effect is 87%, with a credible interval of [0.2, 1.4]." The Bayesian answer is more useful because it directly addresses the question: "What do we believe, given what we know and what we have observed?"
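The Bayesian update behind such a statement can be sketched with a conjugate normal-normal model. The prior, observed effect, and standard error below are illustrative assumptions (they do not reproduce the 87% / [0.2, 1.4] figures above); the point is how prior and data combine into a posterior probability:

```python
import math

# Hypothetical numbers: prior belief N(0.5, 1^2) on the treatment effect,
# observed mean effect 0.8 with standard error 0.3 from a small study.
mu0, sd0 = 0.5, 1.0      # prior mean and standard deviation
xbar, se = 0.8, 0.3      # observed effect and its standard error

# Conjugate normal-normal update: precisions add, and the posterior mean
# is the precision-weighted average of prior mean and observed effect.
prec_post = 1 / sd0**2 + 1 / se**2
mu_post = (mu0 / sd0**2 + xbar / se**2) / prec_post
sd_post = prec_post ** -0.5

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Posterior probability of a positive effect, plus a 95% credible interval.
p_positive = 1 - normal_cdf((0 - mu_post) / sd_post)
ci = (mu_post - 1.96 * sd_post, mu_post + 1.96 * sd_post)
print(f"P(effect > 0) = {p_positive:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Note how the posterior mean lands between the prior mean and the observed effect, pulled toward the data because the study's standard error is much smaller than the prior's spread.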

A content platform needs to decide: should we keep recommending the current top item, or explore a new item that might be better but whose performance is uncertain? This is the exploration-exploitation tradeoff, and Thompson sampling — a Bayesian algorithm — provides an elegant solution.
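The core of Thompson sampling fits in a few lines. This is a minimal Beta-Bernoulli sketch with made-up click-through rates: each round, sample one plausible rate per item from its posterior and recommend the item whose sample is highest, so uncertain items still get explored while good items get exploited:

```python
import random

# Hypothetical item click-through rates; the algorithm never sees these.
true_ctr = {"item_a": 0.05, "item_b": 0.12, "item_c": 0.09}

# Beta(1, 1) prior per item: alpha tracks clicks + 1, beta tracks skips + 1.
alpha = {k: 1 for k in true_ctr}
beta = {k: 1 for k in true_ctr}

random.seed(0)
for _ in range(5000):
    # Draw one sample per item from its Beta posterior, pick the best sample.
    sampled = {k: random.betavariate(alpha[k], beta[k]) for k in true_ctr}
    choice = max(sampled, key=sampled.get)
    # Observe a (simulated) click and update that item's posterior.
    clicked = random.random() < true_ctr[choice]
    alpha[choice] += clicked
    beta[choice] += 1 - clicked

pulls = {k: alpha[k] + beta[k] - 2 for k in true_ctr}
print(pulls)  # pulls should concentrate on the truly best item over time
```

As the posteriors sharpen, samples for clearly inferior items rarely win, so exploration fades naturally without any hand-tuned schedule.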

A climate research team needs to forecast regional temperatures for the next 50 years. A point forecast is useless for policymakers — they need to know: "What is the range of plausible outcomes?" Probabilistic forecasting with calibrated uncertainty intervals is the only responsible answer.
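A minimal sketch of what "range of plausible outcomes" means in practice: simulate many future trajectories from a model and report quantile bands rather than a single line. The AR(1) model and all parameters below are illustrative assumptions, not fitted climate values:

```python
import random

# Illustrative AR(1) dynamics: x_t = phi * x_{t-1} + noise.
phi, sigma, horizon, n_paths = 0.8, 1.0, 12, 2000

random.seed(1)
last_obs = 2.0
paths = []
for _ in range(n_paths):
    x, path = last_obs, []
    for _ in range(horizon):
        x = phi * x + random.gauss(0, sigma)
        path.append(x)
    paths.append(path)

# 90% predictive interval at selected horizons (5th to 95th percentile).
widths = []
for h in range(horizon):
    step = sorted(p[h] for p in paths)
    lo, hi = step[int(0.05 * n_paths)], step[int(0.95 * n_paths)]
    widths.append(hi - lo)
    if h % 4 == 0:
        print(f"h={h + 1:2d}: 90% interval [{lo:+.2f}, {hi:+.2f}]")
```

The intervals widen with the forecast horizon, which is exactly the information a point forecast throws away; Chapter 23 covers how to check that such intervals are calibrated (a nominal 90% band should contain roughly 90% of outcomes).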

This part covers Bayesian thinking, practical Bayesian modeling with PyMC, Bayesian optimization and multi-armed bandits, and advanced time series methods that emphasize probabilistic forecasting and uncertainty quantification.

Chapters in This Part

  • Chapter 20, Bayesian Thinking: priors, posteriors, conjugacy, the MAP-MLE connection, prior selection
  • Chapter 21, Bayesian Modeling in Practice: PyMC, MCMC diagnostics, hierarchical models, the Bayesian workflow
  • Chapter 22, Bayesian Optimization and Sequential Decisions: Gaussian processes, acquisition functions, Thompson sampling, Optuna
  • Chapter 23, Advanced Time Series: state-space models, TFT, DeepAR, probabilistic forecasting, calibration

Progressive Project Milestone

  • M8 (Chapter 22): Implement Thompson sampling for StreamRec recommendation exploration, balancing exploitation of known-good items with exploration of uncertain items.

Prerequisites

Chapter 3 (Probability Theory) is essential. Chapter 20 can be read independently; Chapters 21-23 build sequentially.
