
TEDT: Transformer-Based Encoding–Decoding Translation Network for Multimodal Sentiment Analysis - 2022


Research Area:  Machine Learning

Abstract:

Multimodal sentiment analysis is a popular and challenging research topic in natural language processing, yet the individual modalities in a video can contribute very differently to the sentiment analysis result. Along the temporal dimension, natural language sentiment is influenced by nonnatural language sentiment, which may strengthen or weaken the sentiment of the current natural language. In addition, nonnatural language features are often of poor quality, which fundamentally limits the effectiveness of multimodal fusion. To address these issues, we propose a transformer-based multimodal encoding–decoding translation network that adopts a joint encoding–decoding scheme with text as the primary information and sound and image as the secondary information. To reduce the negative impact of nonnatural language data on natural language data, we introduce a modality reinforcement cross-attention module that converts nonnatural language features into natural language features, improving their quality and enabling better integration of multimodal features. Moreover, a dynamic filtering mechanism removes erroneous information generated during cross-modal interaction to further improve the final output. We evaluated the proposed method on two multimodal sentiment analysis benchmark datasets (MOSI and MOSEI), achieving accuracies of 89.3% and 85.9%, respectively, and outperforming current state-of-the-art methods. Our model greatly improves multimodal fusion and analyzes human sentiment more accurately.
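
The abstract outlines the core mechanism: nonnatural language (audio or visual) features attend to text features so they are translated toward the language feature space, and a filtering step suppresses unreliable cross-modal signals. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; it is not the authors' released code, and the query/key roles, the dimensions, and the simple learned gate standing in for the paper's dynamic filtering mechanism are assumptions.

import torch
import torch.nn as nn

class ModalityReinforcementBlock(nn.Module):
    # Translates a nonnatural-language sequence (audio or visual) toward the
    # text feature space via cross-attention, then gates the result.
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # Learned gate: an assumed stand-in for the paper's dynamic filtering.
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())

    def forward(self, nonlang, text):
        # nonlang: (batch, T_n, d_model) audio or visual features (queries)
        # text:    (batch, T_t, d_model) text features (keys and values)
        attended, _ = self.cross_attn(query=nonlang, key=text, value=text)
        g = self.gate(torch.cat([nonlang, attended], dim=-1))
        return self.norm(nonlang + g * attended)  # gated residual fusion

# Toy usage with random tensors standing in for projected MOSI/MOSEI features.
block = ModalityReinforcementBlock()
audio = torch.randn(2, 50, 128)   # acoustic sequence, projected to d_model
text = torch.randn(2, 30, 128)    # text sequence (e.g., token embeddings), projected
fused = block(audio, text)        # -> (2, 50, 128), audio reinforced by text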

Keywords:  
Encoding
Decoding
Sentiment Analysis
Multimodal
Transformer
Multimodal attention
Multimodal fusion

Author(s) Name:  Fan Wang, Shengwei Tian, Long Yu, Jing Liu, Junwen Wang, Kun Li & Yongtao Wang

Journal name:  Cognitive Computation

Conference name:  

Publisher name:  Springer

DOI:  10.1007/s12559-022-10073-9

Volume Information:  pp. 289–303 (2023)