

Poster

Minimal Variance Sampling in Stochastic Gradient Boosting

Bulat Ibragimov · Gleb Gusev

East Exhibition Hall B, C #9

Keywords: [ Algorithms ] [ Boosting and Ensemble Methods ] [ Regularization ] [ Algorithms -> Stochastic Methods; Theory ]


Abstract:

Stochastic Gradient Boosting (SGB) is a widely used approach to regularizing boosting models based on decision trees. It has been shown that, in many cases, random sampling at each iteration can lead to better generalization performance of the model and can also decrease the learning time. Different sampling approaches with non-uniform probabilities have been proposed, and it is not currently clear which approach is the most effective. In this paper, we formulate the problem of randomization in SGB as an optimization of sampling probabilities that maximizes the estimation accuracy of the split scoring used to train decision trees. This optimization problem has a closed-form, nearly optimal solution, which leads to a new sampling technique that we call Minimal Variance Sampling (MVS). The method both decreases the number of examples needed for each boosting iteration and significantly increases the quality of the model compared to state-of-the-art sampling methods. The superiority of the algorithm is confirmed by its introduction as the new default subsampling option in CatBoost, a gradient boosting library achieving state-of-the-art quality on various machine learning tasks.
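To make the idea concrete, the Python sketch below illustrates a sampling rule of the MVS form: each example i gets a regularized gradient magnitude g_hat_i = sqrt(g_i^2 + lam * h_i^2) and is kept with probability min(1, g_hat_i / mu), where the threshold mu is tuned so the expected sample size matches the budget, and sampled examples are reweighted by 1/p_i to keep split-score estimates unbiased. This is a minimal illustration under these assumptions (the function name, the regularization constant lam, and the binary search are ours for illustration), not the paper's exact formulation.

import numpy as np

def mvs_probabilities(grads, hessians, sample_rate, lam=0.1):
    # Regularized gradient magnitude g_hat_i = sqrt(g_i^2 + lam * h_i^2)
    # (assumed form); example i is kept with probability min(1, g_hat_i / mu).
    g_hat = np.sqrt(grads ** 2 + lam * hessians ** 2)
    n_target = sample_rate * len(g_hat)
    # Binary-search the threshold mu so that the expected number of
    # sampled examples, sum_i min(1, g_hat_i / mu), matches n_target.
    lo, hi = 0.0, float(g_hat.max())
    for _ in range(50):
        mu = 0.5 * (lo + hi)
        expected = np.minimum(1.0, g_hat / mu).sum()
        if expected > n_target:
            lo = mu  # too many examples kept: raise the threshold
        else:
            hi = mu
    return np.minimum(1.0, g_hat / hi)

# Sample each example with its probability and reweight by 1/p so that
# the gradient statistics (and hence split scores) stay unbiased.
rng = np.random.default_rng(0)
grads, hessians = rng.normal(size=1000), np.ones(1000)
probs = mvs_probabilities(grads, hessians, sample_rate=0.3)
mask = rng.random(1000) < probs
weights = 1.0 / probs[mask]

In the CatBoost library itself, MVS can be selected with the bootstrap_type='MVS' training parameter, with subsample controlling the sampled share of examples (availability and defaults depend on the CatBoost version).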
