Deep learning has achieved remarkable success in numerous perception tasks, including visual object recognition, text understanding, and speech recognition. To support principled inference in these perception applications, a class of probabilistic graphical models known as Bayesian neural networks has emerged. Bayesian neural networks can control overfitting by placing distributions over network weights and performing posterior inference. The probabilistic framework of Bayesian deep learning comprises two components: a perception component and a task-specific component.
Bayesian neural networks are applied across many fields, including recommender systems, computer vision, topic models, natural language processing (NLP), domain adaptation, link prediction, speech recognition, time series forecasting, and healthcare. Existing challenges for Bayesian neural networks include Bayesian reasoning over structures, hybrid models, neural nonparametric methods, and inference enhancements. Future development will require more comprehensive research on these applications and challenges, supported by efficient Bayesian neural network designs.
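To make the idea of posterior inference over weights concrete, the following is a minimal sketch of a variational ("Bayes by Backprop" style) Bayesian linear layer. It assumes PyTorch is available; the class name BayesianLinear, the standard-normal prior, and the initialization values are illustrative assumptions rather than an implementation described in this survey.

```python
# Minimal sketch of a variational Bayesian linear layer (assumes PyTorch).
# Weights are Gaussian random variables; training would minimize the negative
# log-likelihood plus the KL term (the negative evidence lower bound).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer whose weights and biases have a Gaussian variational posterior."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Variational posterior parameters: mean and (softplus-transformed) scale.
        self.w_mu = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x):
        # Reparameterization trick: sample weights from the posterior each forward pass.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)

    def kl_divergence(self):
        # Closed-form KL(q(w) || p(w)) against a standard-normal prior (an assumption here).
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        kl = 0.0
        for mu, sigma in ((self.w_mu, w_sigma), (self.b_mu, b_sigma)):
            kl = kl + (sigma.pow(2) + mu.pow(2) - 1.0 - 2.0 * sigma.log()).sum() / 2.0
        return kl
```

In this sketch, the KL term added to the data-fit loss is what plays the regularizing role mentioned above, shrinking the posterior toward the prior and thereby countering overfitting.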
• Bayesian Deep Learning (BDL) has emerged as a unified probabilistic graphical framework that integrates deep learning and Bayesian models.
• Deep Bayesian networks handle deep and nonlinear conditional dependencies more effectively and efficiently than conventional shallow models.
• Bayesian Deep Learning is widely used for reasoning about uncertainty and mitigating the overfitting problem (see the sketch after this list).
• In recent years, BDL has grown in popularity and has found successful applications in recommender systems and computer vision.
• In real-world tasks, BDL faces significant challenges: it is nontrivial to design an efficient Bayesian formulation of neural networks with reasonable time complexity, and scalability remains an open issue.
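As a concrete illustration of the uncertainty-reasoning point above, the sketch below uses Monte Carlo dropout, one common and inexpensive approximation to Bayesian inference in neural networks. It assumes PyTorch; the architecture, dropout rate, and number of samples are arbitrary illustrative choices, not values taken from this survey.

```python
# Minimal sketch of Monte Carlo dropout for predictive uncertainty (assumes PyTorch).
import torch
import torch.nn as nn

# Illustrative model: dropout kept between layers so it can act stochastically at test time.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def predict_with_uncertainty(model, x, n_samples=50):
    """Run several stochastic forward passes and return the predictive mean and std."""
    model.train()  # keep dropout active, approximating sampling from the posterior
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(4, 10)          # a batch of 4 illustrative inputs
mean, std = predict_with_uncertainty(model, x)
print(mean.shape, std.shape)    # per-example predictive mean and uncertainty
```

The spread of the sampled predictions (std) serves as a simple uncertainty estimate: inputs far from the training distribution tend to produce more disagreement across the stochastic passes.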