
Parallel computing method of deep belief networks and its application to traffic flow prediction - 2019


Research Area:  Machine Learning


Deep belief networks (DBNs), with their outstanding ability to learn features from input data, have attracted particular attention and are applied widely in image processing, speech recognition, natural language interpretation, disease diagnosis, and other areas. However, owing to the large volumes of data involved, the training processes of DBNs are time-consuming and may not satisfy the requirements of real-time application systems. In this study, a single dataset is decomposed into multiple subdatasets that are distributed to multiple computing nodes, and each computing node learns the features of its own subdataset. On the precondition of preserving the features that a single computing node would learn from the total dataset, the single-dataset learning models and algorithms are extended to cases where multiple computing nodes learn multiple subdatasets in parallel. Learning models and algorithms are proposed for the parallel computing of DBN learning processes. A master–slave parallel computing structure is designed, in which the slave computing nodes learn the features of their respective subdatasets and transmit them to the master computing node, whose critical role is to synthesize the features learned by the slaves. The broadcast, synchronization, and synthesis steps are repeated until all features of the subdatasets have been learned. The proposed parallel computing method is applied to traffic flow prediction using real-world traffic flow data. The experimental results verify the effectiveness of the parallel computing method for DBN learning processes in terms of decreasing pre-training and fine-tuning times while maintaining prominent feature learning abilities.
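The master–slave scheme described in the abstract can be sketched in code. The following is an illustrative sketch only, not the authors' implementation: it uses a single restricted Boltzmann machine layer (the building block of a DBN) trained with one-step contrastive divergence, splits the dataset into subdatasets for hypothetical "slave" workers, and has the "master" synthesize their parameter updates by averaging. All names (`RBM`, `cd1_update`, `parallel_pretrain`) are assumptions introduced for this example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal restricted Boltzmann machine (one DBN layer)."""
    def __init__(self, n_vis, n_hid, rng):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)

    def cd1_update(self, v0, lr=0.05):
        # One contrastive-divergence (CD-1) step on a batch.
        # Returns parameter deltas instead of applying them, so a
        # "slave" can compute updates from the broadcast parameters.
        h0 = sigmoid(v0 @ self.W + self.b_hid)
        v1 = sigmoid(h0 @ self.W.T + self.b_vis)
        h1 = sigmoid(v1 @ self.W + self.b_hid)
        n = v0.shape[0]
        dW = lr * (v0.T @ h0 - v1.T @ h1) / n
        dbv = lr * (v0 - v1).mean(axis=0)
        dbh = lr * (h0 - h1).mean(axis=0)
        return dW, dbv, dbh

def parallel_pretrain(data, n_hid, n_slaves=4, epochs=10, seed=0):
    """Master-slave style pre-training sketch (sequentially simulated).

    Each epoch: the master 'broadcasts' its current parameters, every
    slave computes CD-1 deltas on its own subdataset, and the master
    'synthesizes' the learned features by averaging the deltas.
    """
    rng = np.random.default_rng(seed)
    master = RBM(data.shape[1], n_hid, rng)
    shards = np.array_split(data, n_slaves)   # decompose dataset into subdatasets
    for _ in range(epochs):
        deltas = [master.cd1_update(shard) for shard in shards]  # slaves learn
        master.W += np.mean([d[0] for d in deltas], axis=0)      # master synthesizes
        master.b_vis += np.mean([d[1] for d in deltas], axis=0)
        master.b_hid += np.mean([d[2] for d in deltas], axis=0)
    return master
```

In a real deployment the loop body would run on separate nodes (e.g. via MPI), with the averaging step playing the role of the master's synchronization and synthesis; here the slaves are simulated sequentially to keep the sketch self-contained.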


Author(s) Name:  Lu Zhao, Yonghua Zhou, Huapu Lu and Hamido Fujita

Journal name:  Knowledge-Based Systems

Conference name:  

Publisher name:  Elsevier

DOI:  10.1016/j.knosys.2018.10.025

Volume Information:  Volume 163, 1 January 2019, Pages 972-987