Regularization
Techniques to prevent overfitting by adding constraints or penalties to the model. Essential for generalizing well to unseen data.
Types
- L1 (Lasso) — adds an absolute-value penalty on the weights; encourages sparsity (some weights become exactly zero)
- L2 (Ridge) — adds a squared penalty on the weights; shrinks them toward zero without zeroing them out
- Dropout — randomly zeroes activations during training (neural networks)
- Early Stopping — stop training when validation loss stops improving
- Data Augmentation — artificially expand the training data with label-preserving transformations
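A minimal sketch of the L2 (Ridge) case, using the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy. The data here is synthetic and the function name is my own; the point is just that a larger λ yields a smaller weight norm:

```python
import numpy as np

# Synthetic regression data (illustrative, not from the note).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([3.0, -2.0, 0.0, 1.5, 0.0])
y = X @ true_w + rng.normal(scale=0.5, size=50)

def ridge(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = ridge(X, y, 0.0)    # no penalty: ordinary least squares
w_reg = ridge(X, y, 10.0)   # L2 penalty shrinks the weights
```

The norm of `w_reg` is strictly smaller than that of `w_ols`; L1 behaves similarly but has no closed form and drives some weights exactly to zero.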
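Dropout can be sketched as "inverted dropout": at train time each activation is zeroed with probability p and survivors are scaled by 1/(1-p) so the expected activation is unchanged; at eval time the layer is the identity. The function below is a hand-rolled illustration, not a library API:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout (illustrative sketch)."""
    if not training or p == 0.0:
        return x                          # eval mode: identity
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p       # keep each unit with prob 1 - p
    return x * mask / (1.0 - p)           # rescale so E[output] == x

x = np.ones((4, 8))
out_train = dropout(x, p=0.5, training=True, rng=np.random.default_rng(0))
out_eval = dropout(x, p=0.5, training=False)
```

With p=0.5 on an all-ones input, surviving entries become 2.0 and dropped entries 0.0, while eval mode returns the input unchanged.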
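Early stopping reduces to tracking the best validation loss and stopping after it fails to improve for a `patience` number of checks. A control-flow sketch with synthetic loss values (names and numbers are my own):

```python
def early_stop_index(val_losses, patience=2):
    """Return the index of the best validation loss, scanning until
    `patience` consecutive checks pass without improvement."""
    best, best_i, waited = float("inf"), 0, 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, best_i, waited = loss, i, 0
        else:
            waited += 1
            if waited >= patience:
                break                      # stop: no improvement for `patience` checks
    return best_i

losses = [1.0, 0.8, 0.7, 0.72, 0.75, 0.74]
print(early_stop_index(losses))  # → 2
```

In practice you would also restore the model weights saved at the best epoch, not just its index.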
Related
- Bias-Variance Tradeoff (regularization reduces variance)
- Linear Models (L1/L2 regularization)