How to Use This Book

This book is designed for multiple reading paths depending on your background and goals. Here is how to get the most out of it.

Reading Paths

Path 1: The Complete Journey

If you have programming experience but limited ML background, work through the book sequentially. Each chapter builds on concepts from previous chapters, and the progression is carefully designed to minimize conceptual jumps.

Estimated time: 400–500 hours (including exercises)

Path 2: The Fast Track to Transformers

If you have strong ML foundations and want to focus on transformers and LLMs:

  1. Skim Chapters 1–5 (review math and Python tools)
  2. Skim Chapters 6–10 (ensure ML fundamentals are solid)
  3. Read Chapters 11–13 (deep learning foundations needed for transformers)
  4. Skim Chapters 14–17 (CNNs, RNNs, autoencoders, GANs — useful context)
  5. Deep read Chapters 18–25 (the transformer and LLM core)
  6. Select from Chapters 26–40 based on interest

Estimated time: 200–250 hours

Path 3: The Systems Engineer

If you are building AI-powered applications and need systems knowledge:

  1. Chapters 18–19 (understand attention and transformers)
  2. Chapters 20–21 (pre-training and autoregressive models)
  3. Chapter 23 (prompt engineering)
  4. Chapter 24 (fine-tuning)
  5. Deep read Chapters 31–35 (RAG, agents, serving, MLOps, distributed training)
  6. Capstone Projects 1–3

Estimated time: 150–200 hours

Path 4: Reference Mode

Each chapter is designed to work as a standalone reference. Use the table of contents, glossary (Appendix E), and key takeaways pages to quickly find what you need.

Chapter Structure

Every chapter follows a consistent structure:

| Component | File | Purpose |
|---|---|---|
| Main content | index.md | Core concepts, math, code (~8,000–12,000 words) |
| Exercises | exercises.md | 25–40 graduated practice problems |
| Quiz | quiz.md | 20–30 self-assessment questions with hidden answers |
| Case Study 1 | case-study-01.md | Primary real-world application |
| Case Study 2 | case-study-02.md | Secondary real-world application |
| Key Takeaways | key-takeaways.md | One-page chapter summary |
| Further Reading | further-reading.md | Annotated bibliography |
| Code Examples | code/example-*.py | Three standalone Python scripts |
| Exercise Solutions | code/exercise-solutions.py | Solutions to programming exercises |
| Case Study Code | code/case-study-code.py | Complete code for case studies |
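As a sketch of how this layout can be used programmatically, the following small helper (the function name and directory argument are illustrative, not part of the book's tooling) checks whether a chapter directory contains the expected Markdown files from the table above:

```python
from pathlib import Path

# Markdown files every chapter directory should contain, per the table above.
EXPECTED_FILES = [
    "index.md",
    "exercises.md",
    "quiz.md",
    "case-study-01.md",
    "case-study-02.md",
    "key-takeaways.md",
    "further-reading.md",
]


def missing_chapter_files(chapter_dir: str) -> list[str]:
    """Return the expected chapter files that are missing from chapter_dir."""
    root = Path(chapter_dir)
    return [name for name in EXPECTED_FILES if not (root / name).exists()]
```

Running `missing_chapter_files("chapters/ch18")` on a partially downloaded chapter would list exactly which files still need to be fetched.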

How to Approach Each Chapter

  1. Read the chapter overview and learning objectives
  2. Work through the main content with a code editor open — type and run the examples
  3. Complete the quiz to check understanding (aim for 70%+)
  4. Do the exercises — at minimum, all Part A (conceptual) and Part B (calculations) problems
  5. Study at least one case study to see concepts applied to real scenarios
  6. Review key takeaways before moving to the next chapter

Setting Up Your Environment

# Create a virtual environment
python -m venv aibook-env
source aibook-env/bin/activate  # Linux/Mac
# or: aibook-env\Scripts\activate  # Windows

# Install dependencies
pip install -r requirements.txt

# Verify PyTorch installation
python -c "import torch; print(f'PyTorch {torch.__version__}, CUDA: {torch.cuda.is_available()}')"
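If the PyTorch one-liner fails, it helps to know which packages are missing before reinstalling everything. The stdlib-only sketch below (the package list is an assumption; adjust it to match your requirements.txt) reports whether each package is installed without importing it, which is faster and avoids import-time errors:

```python
import importlib.util


def check_packages(names: list[str]) -> dict[str, bool]:
    """Report whether each named package is installed, without importing it."""
    return {name: importlib.util.find_spec(name) is not None for name in names}


# Example package list; edit to match the book's requirements.txt.
status = check_packages(["numpy", "torch", "transformers"])
for name, found in status.items():
    print(f"{name}: {'OK' if found else 'MISSING'}")
```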

Hardware Recommendations

| Part | Minimum | Recommended |
|---|---|---|
| I–II (Foundations, ML) | Any modern CPU | Any modern CPU |
| III (Deep Learning) | CPU (slow but works) | GPU with 8GB+ VRAM |
| IV–V (Transformers, Multimodal) | GPU with 8GB VRAM | GPU with 16GB+ VRAM |
| VI (Systems) | GPU with 8GB VRAM | GPU with 24GB+ VRAM or cloud |
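A quick way to sanity-check these numbers against a specific model is a back-of-envelope VRAM estimate. The sketch below counts only the memory needed to hold the weights (activations, optimizer state, and KV cache add more on top), assuming 2 bytes per parameter for fp16/bf16:

```python
def inference_vram_gb(n_params_billions: float, bytes_per_param: int = 2) -> float:
    """Rough VRAM (GiB) needed just to hold model weights.

    Excludes activations, KV cache, and framework overhead, so treat the
    result as a lower bound. bytes_per_param is 2 for fp16/bf16, 4 for fp32.
    """
    return n_params_billions * 1e9 * bytes_per_param / 1024**3


# A 7B-parameter model in fp16 needs roughly 13 GiB for weights alone,
# which is why an 8GB card is tight and 16GB+ is recommended.
print(f"{inference_vram_gb(7):.1f} GiB")
```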

For chapters requiring significant GPU resources, we provide instructions for using Google Colab and cloud computing alternatives.

Callout Box Guide

Throughout the book, you will encounter these callout boxes:

💡 Intuition: Mental models, analogies, and simplified explanations to build understanding before formalism.

📊 Real-World Application: How this concept is used in industry, with specific company or product examples.

⚠️ Common Pitfall: Mistakes that practitioners frequently make, and how to avoid them.

🎓 Advanced: Graduate-level material that can be skipped on first reading without loss of continuity.

✅ Best Practice: Industry-standard approaches and recommendations.

📝 Note: Additional context, historical background, or supplementary information.

Code Conventions

All code in this book follows these conventions:

  • PEP 8 formatting
  • Google-style docstrings on all functions
  • Type hints on all function signatures
  • torch.manual_seed(42) for reproducibility in all PyTorch code
  • Self-contained examples — each code file can be run independently
  • Progressive framework usage: NumPy (Ch 1–10) → PyTorch (Ch 11+) → HuggingFace (Ch 20+)
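To illustrate the conventions above in one place, here is a made-up example function (not taken from the book's code files). It shows PEP 8 formatting, a Google-style docstring, type hints, and seeded randomness; it uses Python's stdlib `random` so it runs without PyTorch, where the book's own code would call `torch.manual_seed(42)` instead:

```python
import random


def sample_minibatch(data: list[int], batch_size: int, seed: int = 42) -> list[int]:
    """Draw a reproducible random mini-batch from a dataset.

    Args:
        data: Pool of examples to sample from.
        batch_size: Number of examples to draw without replacement.
        seed: RNG seed, fixed at 42 to mirror the book's use of
            torch.manual_seed(42) for reproducibility.

    Returns:
        A list of batch_size sampled examples.
    """
    rng = random.Random(seed)  # local RNG, so repeated calls give the same batch
    return rng.sample(data, batch_size)
```

Because the seed is fixed, calling `sample_minibatch(list(range(100)), 5)` twice returns the same batch, which is the property the seeding convention is meant to guarantee.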

Errata and Updates

AI is a fast-moving field. For corrections, updates, and supplementary materials, check the repository associated with this book.