Artificial Intelligence Engineering: Foundational Concepts and Advanced Methods
Complete Table of Contents
Front Matter
Part I: Mathematical and Computational Foundations
Chapters 1–5 · Building the mathematical toolkit every AI engineer needs
| Chapter | Title | Pages | Key Topics |
| --- | --- | --- | --- |
| 1 | The Landscape of AI Engineering | ~35 | AI history, subfields, modern AI stack, career paths |
| 2 | Linear Algebra for AI | ~35 | Vectors, matrices, eigendecomposition, SVD, NumPy |
| 3 | Calculus, Optimization, and Automatic Differentiation | ~35 | Gradients, chain rule, SGD, Adam, autograd |
| 4 | Probability, Statistics, and Information Theory | ~35 | Bayes' theorem, distributions, entropy, KL divergence |
| 5 | Python for AI Engineering | ~35 | NumPy, pandas, matplotlib, Jupyter, profiling |
Part II: Machine Learning Fundamentals
Chapters 6–10 · Classical ML algorithms and the art of modeling
| Chapter | Title | Pages | Key Topics |
| --- | --- | --- | --- |
| 6 | Supervised Learning: Regression and Classification | ~35 | Linear/logistic regression, SVMs, decision trees, ensembles |
| 7 | Unsupervised Learning and Dimensionality Reduction | ~35 | K-means, DBSCAN, PCA, t-SNE, UMAP |
| 8 | Model Evaluation, Selection, and Validation | ~35 | Cross-validation, metrics, bias-variance, hyperparameter tuning |
| 9 | Feature Engineering and Data Pipelines | ~35 | Encoding, scaling, feature selection, sklearn pipelines |
| 10 | Probabilistic and Bayesian Methods | ~35 | Naive Bayes, Bayesian inference, MCMC, probabilistic programming |
Part III: Deep Learning Foundations
Chapters 11–17 · From perceptrons to generative models
| Chapter | Title | Pages | Key Topics |
| --- | --- | --- | --- |
| 11 | Neural Networks from Scratch | ~35 | Perceptrons, MLPs, activations, backpropagation, PyTorch basics |
| 12 | Training Deep Networks | ~35 | Loss functions, optimizers, learning rate schedules, batch norm |
| 13 | Regularization and Generalization | ~35 | Dropout, weight decay, data augmentation, early stopping |
| 14 | Convolutional Neural Networks | ~35 | Convolutions, pooling, ResNet, transfer learning for vision |
| 15 | Recurrent Neural Networks and Sequence Modeling | ~35 | RNNs, LSTMs, GRUs, sequence-to-sequence, teacher forcing |
| 16 | Autoencoders and Representation Learning | ~35 | Vanilla AE, VAE, contrastive learning, self-supervised methods |
| 17 | Generative Adversarial Networks | ~35 | GAN training, DCGAN, StyleGAN, evaluation metrics |
Part IV: Transformers and Modern NLP
Chapters 18–25 · The transformer revolution and modern NLP
| Chapter | Title | Pages | Key Topics |
| --- | --- | --- | --- |
| 18 | The Attention Mechanism | ~35 | Bahdanau attention, self-attention, multi-head attention |
| 19 | The Transformer Architecture | ~35 | Encoder-decoder, positional encoding, layer norm, building a transformer |
| 20 | Pre-training and Transfer Learning for NLP | ~35 | Word2Vec, BERT, masked LM, tokenization, HuggingFace |
| 21 | Decoder-Only Models and Autoregressive Language Models | ~35 | GPT architecture, causal masking, text generation, sampling |
| 22 | Scaling Laws and Large Language Models | ~35 | Chinchilla scaling, emergent abilities, model families, benchmarks |
| 23 | Prompt Engineering and In-Context Learning | ~35 | Zero/few-shot, chain-of-thought, structured outputs, evaluation |
| 24 | Fine-Tuning Large Language Models | ~35 | Full fine-tuning, LoRA, QLoRA, PEFT, instruction tuning |
| 25 | Alignment: RLHF, DPO, and Beyond | ~35 | Reward modeling, PPO, DPO, constitutional AI, red teaming |
Part V: Beyond Text — Multimodal and Generative AI
Chapters 26–30 · Vision, audio, video, and multimodal intelligence
| Chapter | Title | Pages | Key Topics |
| --- | --- | --- | --- |
| 26 | Vision Transformers and Modern Computer Vision | ~35 | ViT, DeiT, Swin, object detection, segmentation |
| 27 | Diffusion Models and Image Generation | ~35 | DDPM, score matching, Stable Diffusion, ControlNet |
| 28 | Multimodal Models and Vision-Language AI | ~35 | CLIP, LLaVA, Flamingo, image captioning, VQA |
| 29 | Speech, Audio, and Music AI | ~35 | Whisper, TTS, spectrograms, music generation |
| 30 | Video Understanding and Generation | ~35 | Video transformers, temporal modeling, video generation |
Part VI: AI Systems Engineering
Chapters 31–35 · Building, deploying, and scaling AI systems
| Chapter | Title | Pages | Key Topics |
| --- | --- | --- | --- |
| 31 | Retrieval-Augmented Generation (RAG) | ~35 | Embeddings, vector databases, chunking, hybrid search |
| 32 | AI Agents and Tool Use | ~35 | ReAct, function calling, agent frameworks, planning |
| 33 | Inference Optimization and Model Serving | ~35 | Quantization, distillation, KV caching, vLLM, TensorRT |
| 34 | MLOps and LLMOps | ~35 | Experiment tracking, CI/CD for ML, monitoring, evaluation |
| 35 | Distributed Training and Scaling | ~35 | Data parallelism, model parallelism, FSDP, DeepSpeed |
Part VII: Advanced and Emerging Topics
Chapters 36–39 · Specialized domains and cross-cutting concerns
| Chapter | Title | Pages | Key Topics |
| --- | --- | --- | --- |
| 36 | Reinforcement Learning for AI Engineers | ~35 | MDPs, Q-learning, policy gradients, PPO, GRPO |
| 37 | Graph Neural Networks and Structured Data | ~35 | GCN, GAT, message passing, molecular graphs |
| 38 | Interpretability, Explainability, and Mechanistic Understanding | ~35 | SHAP, attention visualization, probing, mechanistic interpretability |
| 39 | AI Safety, Ethics, and Governance | ~35 | Bias, fairness, regulation, responsible AI |
Part VIII: The Frontier
Chapter 40 · Where AI engineering is headed
Part IX: Capstone Projects
Integrative projects applying concepts across multiple parts
| Project | Title | Chapters Applied |
| --- | --- | --- |
| 1 | Build a Production RAG System with Guardrails | 20, 23, 31, 32, 33, 34 |
| 2 | Fine-Tune and Deploy a Domain-Specific LLM | 12, 19, 24, 25, 33, 34 |
| 3 | End-to-End Multimodal AI Application | 26, 27, 28, 31, 32, 34 |
Appendices
Book Statistics
| Metric | Value |
| --- | --- |
| Total Chapters | 40 |
| Total Parts | 8 + Capstone |
| Estimated Pages | ~1,400 |
| Estimated Words | ~560,000 |
| Code Examples | 120+ standalone scripts |
| Exercises | 1,000–1,600 problems |
| Quiz Questions | 800–1,200 |
| Case Studies | 80 |
| Capstone Projects | 3 |