Research Area:  Machine Learning
Deep belief networks (DBNs), with their outstanding ability to learn features from input data, have attracted particular attention and are applied widely in image processing, speech recognition, natural language interpretation, and disease diagnosis, among other areas. However, owing to the large volumes of data involved, the training processes of DBNs are time-consuming and may not satisfy the requirements of real-time application systems. In this study, a single dataset is decomposed into multiple subdatasets that are distributed to multiple computing nodes, and each computing node learns the features of its own subdataset. On the precondition that the learned features remain consistent with those a single computing node would learn from the total dataset, the single-dataset learning models and algorithms are extended to the case where multiple computing nodes learn from multiple subdatasets in parallel. Learning models and algorithms are proposed for the parallel computing of DBN learning processes. A master–slave parallel computing structure is designed, in which the slave computing nodes learn the features of their respective subdatasets and transmit them to the master computing node. The master computing node synthesizes the features learned by the slave computing nodes and broadcasts the result back to them. This cycle of broadcast, synchronization, and synthesis is repeated until all features of the subdatasets have been learned. The proposed parallel computing method is applied to traffic flow prediction using practical traffic flow data. Our experimental results verify the effectiveness of the parallel computing method for DBN learning processes in terms of decreasing pre-training and fine-tuning times while preserving prominent feature learning ability.
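The master–slave training cycle described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a single RBM layer trained with one step of contrastive divergence (CD-1) per slave, and it assumes parameter averaging as the master's synthesis rule, since the abstract does not specify the exact synthesis operation. All function names (`cd1_update`, `parallel_pretrain_round`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, data, lr=0.1):
    """One CD-1 step on weight matrix W for one slave's subdataset."""
    h_prob = sigmoid(data @ W)                       # hidden activations
    h_state = (np.random.rand(*h_prob.shape) < h_prob).astype(float)
    v_recon = sigmoid(h_state @ W.T)                 # reconstructed visibles
    h_recon = sigmoid(v_recon @ W)
    grad = (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
    return W + lr * grad

def parallel_pretrain_round(W, subdatasets, lr=0.1):
    """One master-slave round: each slave updates a copy of W on its own
    subdataset; the master synthesizes the results (here, by averaging)
    and the synthesized W is broadcast back for the next round."""
    slave_weights = [cd1_update(W.copy(), sub, lr) for sub in subdatasets]
    return np.mean(slave_weights, axis=0)            # master synthesis

rng = np.random.default_rng(0)
data = rng.random((40, 6))                           # toy dataset
subdatasets = np.array_split(data, 4)                # decompose for 4 slaves
W = rng.standard_normal((6, 3)) * 0.01
for _ in range(5):                                   # repeated broadcast/synthesis
    W = parallel_pretrain_round(W, subdatasets)
print(W.shape)  # (6, 3)
```

Averaging is only one plausible synthesis choice; the paper derives its own models and algorithms for combining the slaves' learned features.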
Author(s) Name:  Lu Zhao, Yonghua Zhou, Huapu Lu and Hamido Fujita
Journal name:  Knowledge-Based Systems
Publisher name:  Elsevier
Volume Information:  Volume 163, 1 January 2019, Pages 972-987
Paper Link:   https://www.sciencedirect.com/science/article/abs/pii/S0950705118305112