Research Area:  Machine Learning
Human language can be expressed through multiple sources of information known as modalities, including tone of voice, facial gestures, and spoken words. Recent multimodal learning methods achieve strong performance on human-centric tasks such as sentiment analysis and emotion recognition, but they are often black-box models with very limited interpretability. In this paper we propose Multimodal Routing, which dynamically adjusts weights between input modalities and output representations differently for each input sample. Multimodal Routing can identify the relative importance of both individual modalities and cross-modality features. Moreover, the weight assignment by routing allows us to interpret modality-prediction relationships not only globally (i.e., general trends over the whole dataset) but also locally for each single input sample, while keeping competitive performance compared to state-of-the-art methods.
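The abstract describes routing as iterative, per-sample weight assignment between input modality features and output concept representations. Below is a minimal sketch of such agreement-based routing in PyTorch; the class name, dimensions, and iteration count are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of per-sample routing between modality features and
# output concepts. RoutingLayer, n_iters, and the toy sizes are assumptions.
import torch
import torch.nn.functional as F


class RoutingLayer(torch.nn.Module):
    def __init__(self, n_in, n_out, d, n_iters=3):
        super().__init__()
        # One linear "vote" projection per (input feature, output concept) pair.
        self.vote = torch.nn.Parameter(torch.randn(n_in, n_out, d, d) * 0.01)
        self.n_iters = n_iters

    def forward(self, x):
        # x: (batch, n_in, d) -- one vector per modality or modality-pair feature.
        votes = torch.einsum('bif,iofd->biod', x, self.vote)  # (batch, n_in, n_out, d)
        logits = torch.zeros(*votes.shape[:3], device=x.device)
        for _ in range(self.n_iters):
            # Routing weights: how much each input feature contributes to
            # each output concept, recomputed for every sample.
            r = F.softmax(logits, dim=2)                # (batch, n_in, n_out)
            out = (r.unsqueeze(-1) * votes).sum(dim=1)  # (batch, n_out, d)
            # Agreement between votes and current outputs updates the logits.
            logits = logits + torch.einsum('biod,bod->bio', votes, out)
        return out, r


# Example: 3 modalities plus 3 pairwise features routed to 2 output concepts.
layer = RoutingLayer(n_in=6, n_out=2, d=32)
out, r = layer(torch.randn(4, 6, 32))
```

The returned matrix r gives, for each sample, the weight each modality (or modality-pair) feature assigns to each output concept, which is what enables the local, per-sample interpretation the abstract describes.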
Keywords:  
Multimodal Routing
Interpretability
Multimodal Language Analysis
Machine Learning
Author(s) Name:  Yao-Hung Hubert Tsai, Martin Q. Ma, Muqiao Yang, Ruslan Salakhutdinov, and Louis-Philippe Morency
Journal name:  
Conference name:  Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Publisher name:  Association for Computational Linguistics
DOI:  10.18653/v1/2020.emnlp-main.143
Volume Information:  
Paper Link:  https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8106385/