XGBoost Hyperparameters: Tuning the Learning Rate

Learn how the learning rate can be adjusted to improve the performance of a gradient boosting model trained with XGBoost.

Impact of learning rate on model performance

The learning rate is referred to as eta in the XGBoost documentation, and also as step size shrinkage. This hyperparameter controls how much each new estimator contributes to the ensemble prediction. If you increase the learning rate, you may reach the optimal model, defined as the one with the highest performance on the validation set, in fewer boosting rounds. However, setting it too high risks boosting steps that are too large. In this case, the gradient boosting ...
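To make the role of the learning rate concrete, here is a minimal sketch of shrinkage in boosting, written in plain NumPy rather than with XGBoost itself. The helper names `fit_stump` and `boost` are illustrative, not part of any library API; each round fits a regression stump to the current residuals, and `eta` scales that stump's contribution to the running prediction, exactly the role eta plays in XGBoost.

```python
import numpy as np

def fit_stump(x, residual):
    """Fit a one-split regression stump to the residuals (illustrative helper)."""
    best = None
    for s in np.unique(x)[:-1]:
        left, right = residual[x <= s], residual[x > s]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, s, left.mean(), right.mean())
    _, s, left_val, right_val = best
    return lambda q: np.where(q <= s, left_val, right_val)

def boost(x, y, eta, n_rounds=50):
    """Additive boosting: each new stump's output is shrunk by eta."""
    pred = np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)   # fit the current residuals
        pred += eta * stump(x)           # learning rate shrinks the contribution
    return pred

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)

# Compare training error for a few learning rates over the same budget of rounds.
for eta in (0.05, 0.3, 1.0):
    mse = ((boost(x, y, eta) - y) ** 2).mean()
    print(f"eta={eta}: train MSE={mse:.4f}")
```

With a fixed budget of rounds, a very small eta leaves the model underfit, while larger values fit the training data faster; the trade-off XGBoost users tune is between this speed and the risk of overshooting described above.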