Research Area:  Machine Learning
Multimodal learning, which combines information from different modalities using Deep Neural Networks (DNNs), has gained significant attention in recent years. However, existing approaches often overlook the varying importance of modalities and neglect uncertainty estimation, leading to limited generalization and unreliable predictions. In this paper, we propose a novel algorithm, Dual-level Deep Evidential Fusion (DDEF), which addresses these challenges by integrating multimodal information at both the Basic Belief Assignment (BBA) level and the multimodal level, enhancing accuracy, robustness, and reliability. DDEF uses the Dirichlet framework and BBA methods to connect neural network outputs with Dirichlet distribution parameters, enabling effective uncertainty estimation, and applies Dempster-Shafer Theory (DST) for dual-level fusion, combining evidence from the two BBA methods and from multiple modalities. The approach is validated in two experiments, on synthetic digit classification and on real-world medical prognosis after brain-computer interface (BCI) treatment, where it outperforms existing methods. Our findings emphasize the importance of multimodal integration and uncertainty estimation for reliable decision-making in deep learning.
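The abstract outlines two technical ingredients: mapping network outputs to Dirichlet parameters that define a BBA with an explicit uncertainty mass, and combining BBAs with Dempster's rule. The Python sketch below is a minimal illustration of that general recipe under common evidential-learning conventions, not the authors' DDEF implementation; the softplus evidence mapping and the function names are assumptions.

```python
import numpy as np

def evidence_to_bba(logits):
    """Map raw network outputs to a Basic Belief Assignment (BBA)
    via a Dirichlet parameterisation (softplus evidence is an assumption)."""
    evidence = np.log1p(np.exp(logits))      # softplus: non-negative evidence
    alpha = evidence + 1.0                   # Dirichlet distribution parameters
    strength = alpha.sum()                   # Dirichlet strength S
    k = logits.size                          # number of classes
    belief = evidence / strength             # per-class belief masses
    uncertainty = k / strength               # overall uncertainty mass
    return belief, uncertainty

def dempster_combine(b1, u1, b2, u2):
    """Combine two BBAs (belief masses + uncertainty mass) with Dempster's rule,
    as commonly done in evidential multimodal fusion."""
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)  # mass on conflicting classes
    scale = 1.0 / (1.0 - conflict)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    u = scale * (u1 * u2)
    return b, u

# Toy usage: two "modalities" producing 3-class outputs.
b_a, u_a = evidence_to_bba(np.array([2.0, 0.5, -1.0]))
b_b, u_b = evidence_to_bba(np.array([1.5, 0.2, 0.1]))
b_fused, u_fused = dempster_combine(b_a, u_a, b_b, u_b)
print("fused belief:", b_fused, "uncertainty:", u_fused)
```

The same combination rule can be applied once across BBA methods and again across modalities, which is the dual-level idea the abstract refers to; the exact ordering and any modality weighting are specified in the paper itself.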
Keywords:  Integrating multimodal information, decision-making, Evidential Fusion
Author(s) Name:  Zhimin Shao, Weibei Dou, Yu Pan
Journal name:  Information Fusion
Conference name:  
Publisher name:  ScienceDirect
DOI:  10.1016/j.inffus.2023.102113
Volume Information:  Volume 103 (2024)
Paper Link:   https://www.sciencedirect.com/science/article/pii/S1566253523004293