Research Area:  Machine Learning
A simultaneous understanding of questions and images is crucial in Visual Question Answering (VQA). While existing models have achieved satisfactory performance by associating questions with key objects in images, the answers also contain rich information that can be used to describe the visual contents of images. In this paper, we propose a re-attention framework that utilizes the information in answers for the VQA task. The framework first learns initial attention weights for the objects by calculating the similarity of each word-object pair in the feature space. Then, the visual attention map is reconstructed by re-attending to the objects in images based on the answer. By constraining the initial visual attention map and the reconstructed one to be consistent, the learned visual attention map can be corrected using the answer information. In addition, we introduce a gate mechanism that automatically controls the contribution of re-attention to model training based on the entropy of the learned initial visual attention maps. We conduct experiments on three benchmark datasets, and the results demonstrate that the proposed model performs favorably against state-of-the-art methods.
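The pipeline the abstract describes (initial word-object attention, answer-conditioned re-attention, a consistency constraint, and an entropy-based gate) can be sketched with plain NumPy. This is a minimal illustration, not the authors' implementation; all shapes, feature dimensions, and the specific choices of mean-pooling, KL divergence, and normalized entropy are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 4 question words, 5 image objects, 8-dim features.
rng = np.random.default_rng(0)
d = 8
Q = rng.normal(size=(4, d))   # question word features
V = rng.normal(size=(5, d))   # object (region) features
a = rng.normal(size=(d,))     # answer feature

# Step 1: initial attention from word-object pair similarities,
# pooled over words into one distribution over objects.
sim = Q @ V.T                           # (words, objects)
init_attn = softmax(sim.mean(axis=0))   # (objects,)

# Step 2: re-attention over the same objects, conditioned on the answer.
re_attn = softmax(V @ a)                # (objects,)

# Step 3: consistency term keeping the two maps close
# (KL divergence is one simple choice).
kl = float(np.sum(init_attn * np.log(init_attn / re_attn)))

# Step 4: entropy-based gate — an uncertain (high-entropy) initial map
# lets the re-attention term contribute more. Normalized to [0, 1].
entropy = -float(np.sum(init_attn * np.log(init_attn)))
gate = entropy / np.log(len(init_attn))
consistency_loss = gate * kl
```

In a trained model, `consistency_loss` would be added to the usual answer-prediction loss so that gradients from the answer-side attention correct the question-side attention map.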
Keywords:  
Visual question answering
attention mechanism
re-attention
gating mechanism
Machine Learning
Author(s) Name:   Wenya Guo; Ying Zhang; Jufeng Yang; Xiaojie Yuan
Journal name:   IEEE Transactions on Image Processing
Conference name:  
Publisher name:  IEEE
DOI:  10.1109/TIP.2021.3097180
Volume Information:  Volume: 30, Page(s): 6730 - 6743
Paper Link:   https://ieeexplore.ieee.org/abstract/document/9491928