Research Area:  Machine Learning
Deep learning is a popular tool for image recognition owing to its strong feature-learning capability. However, training most deep networks for image recognition is computationally expensive and time-consuming. In this paper, we propose a broad-based ensemble learning model (BELM) that aims to provide a fast and efficient recognition approach for moderate-to-large-scale image sets on an ordinary computer. The model is constructed as a flat network whose flattened input consists of different feature nodes mapped from the original inputs. The structure is expanded in width through additional feature nodes, and broad incremental learning algorithms are developed to update the feature nodes dynamically when the system needs to be expanded. In addition, a Lasso sparse autoencoder is applied to the feature nodes to obtain compact, sparse features. Compared with most existing state-of-the-art networks, the proposed BELM has a simple structure and requires only a few parameters to be tuned. Extensive experimental results on the classical handwritten digit database (MNIST) and the object recognition dataset (NORB) demonstrate the effectiveness of the proposed model.
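To make the abstract's description more concrete, the following is a minimal, hypothetical sketch of a broad, flat network with Lasso-sparsified feature nodes and a ridge-regression readout. It is not the authors' BELM implementation; the function names, parameters (n_groups, nodes_per_group, lasso_lam, ridge_lam), and the ISTA-based Lasso solver are illustrative assumptions made here for exposition.

```python
# Hypothetical sketch of a broad, flat network: random feature-node groups,
# refined by a Lasso sparse autoencoder, concatenated, and read out with
# closed-form ridge regression. Not the authors' code.
import numpy as np

def ista_lasso(A, B, lam, n_iter=200):
    """Solve min_W 0.5*||A W - B||_F^2 + lam*||W||_1 with a simple ISTA loop."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    W = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(n_iter):
        G = A.T @ (A @ W - B)                     # gradient of the smooth term
        W = W - G / L                             # gradient step
        W = np.sign(W) * np.maximum(np.abs(W) - lam / L, 0.0)  # soft threshold
    return W

def build_feature_nodes(X, n_groups=5, nodes_per_group=20, lasso_lam=1e-3, rng=None):
    """Map inputs to several groups of feature nodes; each group's random
    projection is refined by a Lasso sparse autoencoder, then concatenated."""
    rng = np.random.default_rng(rng)
    groups, weights = [], []
    for _ in range(n_groups):
        W_rand = rng.standard_normal((X.shape[1], nodes_per_group))
        Z0 = np.tanh(X @ W_rand)
        # Sparse autoencoder: find sparse W so that Z0 @ W reconstructs X,
        # then reuse W.T as the compact feature-mapping weights.
        W_sparse = ista_lasso(Z0, X, lasso_lam).T
        groups.append(np.tanh(X @ W_sparse))
        weights.append(W_sparse)
    return np.hstack(groups), weights

def fit_readout(A, Y, ridge_lam=1e-2):
    """Closed-form ridge regression from concatenated feature nodes to labels."""
    return np.linalg.solve(A.T @ A + ridge_lam * np.eye(A.shape[1]), A.T @ Y)

# Toy usage: 100 samples, 784-dim inputs (e.g. flattened 28x28 images), 10 classes.
X = np.random.default_rng(0).standard_normal((100, 784))
Y = np.eye(10)[np.random.default_rng(1).integers(0, 10, 100)]   # one-hot labels
A, _ = build_feature_nodes(X, rng=0)
W_out = fit_readout(A, Y)
pred = np.argmax(A @ W_out, axis=1)
```

Because the readout is solved in closed form, expanding the network in width (adding a new group of feature nodes) only requires recomputing or incrementally updating this final linear solve, which is the kind of broad incremental update the abstract refers to.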
Keywords:  
Broad learning
Lasso sparsity
Concatenation
Moderate-large scale
Ensemble learning
Image recognition
Machine Learning
Deep Learning
Author(s) Name:  Xiurong Zhong, Shukai Duan & Lidan Wang
Journal name:  Artificial Intelligence Review
Conference name:  
Publisher name:  Springer
DOI:  10.1007/s10462-022-10263-9
Volume Information:  
Paper Link:   https://link.springer.com/article/10.1007/s10462-022-10263-9