Boosting
Train models sequentially, with each new model focusing on the errors of the previous ones — by reweighting misclassified examples (AdaBoost) or by fitting the residual errors of the current ensemble (gradient boosting). Boosting reduces bias by combining many weak learners into a strong learner. XGBoost, LightGBM, and CatBoost are the dominant implementations for tabular data.
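The sequential error-fitting idea can be sketched in a few lines. Below is a minimal gradient-boosting loop with squared loss and depth-1 decision stumps; the toy data, learning rate, and number of rounds are illustrative choices, not prescribed by any of the libraries named above:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy regression data (hypothetical): y = sin(x) plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

learning_rate = 0.1
pred = np.full_like(y, y.mean())  # start from a constant baseline prediction
stumps = []

for _ in range(100):
    residuals = y - pred                      # errors of the ensemble so far
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
    pred += learning_rate * stump.predict(X)  # shrunken correction step
    stumps.append(stump)

mse = np.mean((y - pred) ** 2)
```

Each stump is trained on what the ensemble still gets wrong, which is the defining property of boosting; the learning rate shrinks each correction so no single weak learner dominates.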