Research Area:  Machine Learning
Boosting has proven effective at improving the generalization of machine learning models in many fields: it produces high-diversity base learners and builds an accurate ensemble by combining a sufficient number of weak learners. However, it is rarely used in deep learning because of the high training cost of neural networks. Another method, snapshot ensemble, can significantly reduce the training cost, but it struggles to balance the trade-off between training cost and diversity. Inspired by the ideas of snapshot ensemble and boosting, we propose a method named snapshot boosting. It combines several techniques to obtain many base models with high diversity and accuracy, such as the use of a validation set, a boosting-based training framework, and an effective ensemble strategy. Finally, we evaluate our method on computer vision (CV) and natural language processing (NLP) tasks, and the results show that snapshot boosting achieves a better trade-off between training cost and ensemble accuracy than other well-known ensemble methods.
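The snapshot-ensemble idea the abstract builds on can be sketched in a few lines: train with a cyclically restarted learning rate, save a model "snapshot" at the end of each cycle (where the learning rate is low), and average the snapshots' predictions. This is a minimal illustrative sketch, not the authors' snapshot boosting method; the toy one-parameter model and all function names are assumptions chosen to keep the example dependency-free.

```python
# Minimal sketch of snapshot ensembling (NOT the paper's snapshot boosting):
# cyclic cosine learning rate + snapshot at each cycle end + averaged predictions.
import math

def cyclic_lr(step, steps_per_cycle, lr_max=0.1):
    """Cosine-annealed learning rate, restarted at the start of each cycle."""
    t = (step % steps_per_cycle) / steps_per_cycle
    return lr_max / 2 * (math.cos(math.pi * t) + 1)

def train_snapshots(xs, ys, cycles=3, steps_per_cycle=50):
    """SGD on a toy model y = w * x; snapshot w at the end of each cycle."""
    w, snapshots = 0.0, []
    for step in range(cycles * steps_per_cycle):
        lr = cyclic_lr(step, steps_per_cycle)
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
        if (step + 1) % steps_per_cycle == 0:
            snapshots.append(w)  # low-LR point: a converged local model
    return snapshots

def ensemble_predict(snapshots, x):
    """Average the snapshots' predictions (the ensembling step)."""
    return sum(w * x for w in snapshots) / len(snapshots)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # data generated by w = 2
snaps = train_snapshots(xs, ys)
print(round(ensemble_predict(snaps, 10.0), 2))  # prediction near 20.0
```

In a real network the restarts move the weights to different local minima, which is what gives the snapshots their diversity; in this convex toy problem the snapshots coincide, so it only illustrates the training and averaging mechanics.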
Keywords:  Ensemble learning, Deep learning, Boosting, Neural network, Snapshot ensemble
Author(s) Name:  Wentao Zhang, Jiawei Jiang, Yingxia Shao & Bin Cui
Journal name:  Science China Information Sciences
Conference name:  
Publisher name:  Springer
DOI:  10.1007/s11432-018-9944-x
Volume Information:  Volume 63
Paper Link:   https://link.springer.com/article/10.1007/s11432-018-9944-x