Glossary

Regularization strategy:

- Dropout: 0.3 between hidden layers (moderately aggressive for this dataset size)
- Weight decay: 1e-4 (a standard default for Adam)
- Batch normalization: after each linear layer, before dropout
- Early stopping: patience of 15 epochs, monitoring validation loss
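
The strategy above can be sketched in PyTorch. This is a minimal illustration, not the author's actual model: the layer widths, learning rate, and the ReLU activation (placed between batch norm and dropout, a common convention) are assumptions; only the dropout rate, weight decay, layer ordering, and patience come from the entry.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Two hidden layers; dimensions are hypothetical placeholders."""
    def __init__(self, in_dim=32, hidden=64, out_dim=2, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.BatchNorm1d(hidden),   # batch norm after the linear layer...
            nn.ReLU(),                # (activation placement is an assumption)
            nn.Dropout(p_drop),       # ...and dropout 0.3 after it
            nn.Linear(hidden, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = MLP()
# Weight decay 1e-4 passed to Adam, as described above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

class EarlyStopping:
    """Signal a stop when validation loss has not improved for `patience` epochs."""
    def __init__(self, patience=15):
        self.patience = patience
        self.best = float("inf")
        self.count = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.count = val_loss, 0
        else:
            self.count += 1
        return self.count >= self.patience  # True -> stop training
```

In a training loop, `EarlyStopping.step` would be called once per epoch with the validation loss, and training would break as soon as it returns `True`.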
