Research Area:  Machine Learning
Depression is a common mental illness that affects the physical and mental health of hundreds of millions of people worldwide. Designing an efficient and robust depression detection model is therefore an urgent research task. To fully extract depression-related features, we systematically analyze audio-visual and text data related to depression and propose a multimodal fusion model with a multi-level attention mechanism (MFM-Att) for depression detection. The method consists of two stages: in the first stage, two LSTMs and a Bi-LSTM with an attention mechanism learn multi-view audio features, visual features, and rich text features, respectively. In the second stage, the output features of the three modalities are fed into an attention fusion network (AttFN) to obtain effective depression information, exploiting the diversity and complementarity between modalities for depression detection. Notably, the multi-level attention mechanism not only extracts valuable intra-modality depressive features but also learns inter-modality correlations, improving the overall performance of the model by reducing the influence of redundant information. The MFM-Att model is evaluated on the DAIC-WOZ dataset and outperforms state-of-the-art models in terms of root mean square error (RMSE).
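The second-stage fusion described above can be sketched as an attention-weighted combination of the three modality feature vectors. This is a minimal NumPy illustration of the general idea, not the paper's actual AttFN: the scoring vector `w`, the feature dimension `d`, and the random features are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features, w):
    """Fuse modality features by attention weighting.

    features: (3, d) stacked audio, visual, and text feature vectors
    w:        (d,) hypothetical scoring vector (learned in a real model)
    Returns the fused (d,) vector and the (3,) attention weights.
    """
    scores = features @ w      # one scalar relevance score per modality
    alpha = softmax(scores)    # attention weights summing to 1
    fused = alpha @ features   # weighted sum of the modality features
    return fused, alpha

d = 8  # illustrative feature dimension
audio, visual, text = rng.normal(size=(3, d))
feats = np.stack([audio, visual, text])
w = rng.normal(size=d)
fused, alpha = attention_fuse(feats, w)
```

In the actual model, the first-stage LSTM/Bi-LSTM outputs would play the role of `audio`, `visual`, and `text`, and the attention parameters would be learned jointly with the rest of the network rather than sampled at random.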
Keywords:  Depression, efficient, robust, multi-level attention, audio feature, visual feature, rich text feature, diversity, complementarity, root mean square error
Author(s) Name:  Ming Fang, Siyu Peng, Yujia Liang, Chih-Cheng Hung, Shuhua Liu
Journal name:  Biomedical Signal Processing and Control
Conference name:  
Publisher name:  Elsevier
DOI:  https://doi.org/10.1016/j.bspc.2022.104561
Volume Information:  Volume 82
Paper Link:   https://www.sciencedirect.com/science/article/abs/pii/S1746809422010151