Research Area:  Machine Learning
Speech emotion recognition is a challenging research topic that plays a critical role in human-computer interaction. Multimodal inputs further improve performance, as they provide more emotional information. However, existing studies learn from all the information in a sample, while only a small portion of it relates to emotion; the redundant information becomes noise and limits system performance. In this paper, a key-sparse Transformer is proposed for efficient emotion recognition by focusing on emotion-related information. The proposed method is evaluated on the IEMOCAP and LSSED datasets. Experimental results show that it achieves better performance than state-of-the-art approaches.
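The core idea of key sparsity in attention can be illustrated with a minimal sketch: each query attends only to its top-k highest-scoring keys, and the rest are masked out before the softmax. This is an illustrative simplification (the function name, top-k selection rule, and toy shapes below are assumptions for demonstration), not the paper's exact mechanism:

```python
import numpy as np

def key_sparse_attention(Q, K, V, k=2):
    """Illustrative top-k sparse attention: each query attends only to its
    k highest-scoring keys; all other keys are masked out.
    (A sketch of the key-sparsity idea, not the paper's exact method.)"""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                # (n_q, n_k) scaled dot products
    # Threshold at each query's k-th largest score; mask the rest with -inf
    kth = np.sort(scores, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    # Numerically stable softmax over the surviving keys
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                           # (n_q, d_v) attended values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 queries, dim 4
K = rng.normal(size=(5, 4))   # 5 keys
V = rng.normal(size=(5, 4))   # 5 values
out = key_sparse_attention(Q, K, V, k=2)
print(out.shape)  # (3, 4)
```

With k equal to the number of keys, the mask keeps everything and this reduces to ordinary dense attention.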
Keywords:  
Human-computer interaction
Emotion recognition
System performance
Speech recognition
Author(s) Name:  Weidong Chen; Xiaofeng Xing; Xiangmin Xu; Jichen Yang
Journal name:  
Conference name:  2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher name:  IEEE
DOI:  10.1109/ICASSP43922.2022.9746598
Volume Information:  
Paper Link:   https://ieeexplore.ieee.org/abstract/document/9746598/