
Multimodal Feature Fusion By Relational Reasoning and Attention for Visual Question Answering - 2020

Research Area:  Machine Learning

Abstract:

Visual Question Answering (VQA) has recently become a hot topic in computer vision. A key to VQA lies in how to fuse the multimodal features extracted from the image and the question. In this paper, we show that combining visual relationships and attention achieves more fine-grained feature fusion. Specifically, we design an effective and efficient module to reason about complex relationships between visual objects. In addition, a bilinear attention module is learned for question-guided attention on visual objects, which allows us to obtain more discriminative visual features. Given an image and a question in natural language, our VQA model learns a visual relational reasoning network and an attention network in parallel to fuse fine-grained textual and visual features, so that answers can be predicted accurately. Experimental results show that our approach achieves new state-of-the-art single-model performance on both the VQA 1.0 and VQA 2.0 datasets.
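
The abstract describes two branches run in parallel: a relational reasoning module over pairs of visual objects and a question-guided bilinear attention module, whose outputs are fused before answer prediction. The sketch below is a minimal illustration of that fusion idea, not the authors' exact architecture; all layer sizes, module names, and the classifier design are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's exact model) of parallel
# relational reasoning + question-guided bilinear attention fusion for VQA.
import torch
import torch.nn as nn


class RelationalReasoning(nn.Module):
    """Reason over all pairs of visual objects, conditioned on the question."""

    def __init__(self, obj_dim, q_dim, hidden_dim):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + q_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )

    def forward(self, objs, q):
        # objs: (B, N, obj_dim), q: (B, q_dim)
        B, N, D = objs.shape
        oi = objs.unsqueeze(2).expand(B, N, N, D)               # object i
        oj = objs.unsqueeze(1).expand(B, N, N, D)               # object j
        qq = q.unsqueeze(1).unsqueeze(1).expand(B, N, N, q.size(-1))
        pairs = torch.cat([oi, oj, qq], dim=-1)                 # (B, N, N, 2D+q)
        return self.g(pairs).sum(dim=(1, 2))                    # aggregate pairs


class BilinearAttention(nn.Module):
    """Question-guided attention over objects via a low-rank bilinear score."""

    def __init__(self, obj_dim, q_dim, hidden_dim):
        super().__init__()
        self.proj_v = nn.Linear(obj_dim, hidden_dim)
        self.proj_q = nn.Linear(q_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, objs, q):
        # joint bilinear interaction between each object and the question
        joint = self.proj_v(objs) * self.proj_q(q).unsqueeze(1)     # (B, N, H)
        attn = torch.softmax(self.score(joint).squeeze(-1), dim=1)  # (B, N)
        return torch.bmm(attn.unsqueeze(1), objs).squeeze(1)        # attended feature


class VQAFusionModel(nn.Module):
    """Run both branches in parallel and fuse them for answer classification."""

    def __init__(self, obj_dim=2048, q_dim=1024, hidden_dim=512, num_answers=3000):
        super().__init__()
        self.relation = RelationalReasoning(obj_dim, q_dim, hidden_dim)
        self.attention = BilinearAttention(obj_dim, q_dim, hidden_dim)
        self.attn_proj = nn.Linear(obj_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim + q_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, objs, q):
        rel = self.relation(objs, q)                    # relational branch
        att = self.attn_proj(self.attention(objs, q))   # attention branch
        fused = torch.cat([rel, att, q], dim=-1)        # fuse both with question
        return self.classifier(fused)


if __name__ == "__main__":
    model = VQAFusionModel()
    objs = torch.randn(2, 36, 2048)   # e.g. 36 region features per image
    q = torch.randn(2, 1024)          # question embedding
    print(model(objs, q).shape)       # torch.Size([2, 3000])
```

In practice the object features would come from a region detector and the question embedding from a text encoder; the point of the sketch is only that the two fusion branches operate in parallel on the same inputs and are combined before the answer classifier.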

Keywords:  Multimodal fusion, Visual question answering, Visual relational reasoning, Attention mechanism

Author(s) Name:  Weifeng Zhang, Jing Yu, Hua Hu, Haiyang Hu, Zengchang Qin

Journal name:  Information Fusion

Conference name:  

Publisher name:  Elsevier

DOI:  10.1016/j.inffus.2019.08.009

Volume Information:  Volume 55, March 2020, Pages 116-126