EcoSta 2023
A0953
Title: Improvement of gradient boosting using regularization and optimization algorithms
Authors: Hideo Suzuki - Keio University (Japan) [presenting]
Nagomu Iwasa - Keio University (Japan)
Abstract: Several methods are proposed for improving gradient boosting by introducing regularization and optimization algorithms such as Momentum SGD, Adadelta, and Adam. Regularization suppresses overfitting by constraining the degrees of freedom of the constructed model. The regularization term is computed numerically and added to the loss function obtained from the residuals between the predicted and measured values; L1 and L2 regularization terms on the scores of all decision tree leaves are used. In conventional SGD, the training data is shuffled, one sample is drawn at random, its error is calculated, and the parameters are updated by the gradient method to reduce the loss function. The improved optimization algorithms suppress oscillation by using gradient information from previous iterations, which alleviates the shortcomings of conventional SGD. To verify the effect of regularization and of the improved algorithms on gradient boosting, prediction accuracy and computational efficiency are measured on several datasets from the UCI Machine Learning Repository, comparing conventional SGD, SGD (regularization), SGD (improved algorithms), and SGD (regularization + improved algorithms). The results show that SGD (regularization + Adam) generally performs well in terms of both prediction accuracy and computational efficiency.
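As a rough illustration of the combination described above (a sketch under assumptions, not the authors' implementation), the following Python snippet fits the leaf scores of one boosting step to pseudo-residuals using an Adam-style update on an L2-regularized squared-error loss. The leaf assignment, the toy data, and all names (leaf_of, lam, lr, and so on) are assumptions introduced only for this example.

import numpy as np

# Toy sketch: refine the leaf scores w of a fixed tree structure so that they
# fit the pseudo-residuals of the previous boosting round, using Adam and an
# L2 penalty on the leaf scores (assumed setup, for illustration only).
rng = np.random.default_rng(0)

n, n_leaves = 200, 8
leaf_of = rng.integers(0, n_leaves, size=n)   # which leaf each sample falls into
residual = rng.normal(size=n)                 # pseudo-residuals from the previous round
w = np.zeros(n_leaves)                        # leaf scores to be learned

lam = 1.0                                     # L2 regularization strength on leaf scores
lr = 0.1                                      # learning rate
beta1, beta2, eps = 0.9, 0.999, 1e-8
m = np.zeros_like(w)                          # first-moment estimate (gradient average)
v = np.zeros_like(w)                          # second-moment estimate (squared gradients)

for t in range(1, 201):
    pred = w[leaf_of]
    # Gradient of 0.5*sum((pred - residual)^2) + 0.5*lam*sum(w^2) w.r.t. each leaf score
    g = np.bincount(leaf_of, weights=pred - residual, minlength=n_leaves) + lam * w
    # Adam: exponential moving averages of the gradient and its square, with bias correction
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

print("learned leaf scores:", np.round(w, 3))

In this toy version the moving averages play the role described in the abstract: past gradient information damps oscillation of the update, while the L2 term keeps the leaf scores small and thereby limits the degrees of freedom of the fitted tree.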