Research Area:  Machine Learning
The Visual Question Answering (VQA) task combines visual and linguistic processing to answer basic ‘common sense’ questions about given images. Given an image and a question in natural language, a VQA system attempts to find the correct answer using visual elements of the image and inferences drawn from the text of the question. In this survey, we cover and discuss recent datasets released in the VQA domain, addressing various question formats and the robustness of machine-learning models. Next, we discuss new deep learning models that have shown promising results on the VQA datasets. Finally, we present and discuss results we computed with the vanilla VQA model, the Stacked Attention Network, and the winning model of the VQA Challenge 2017. We also provide a detailed analysis, along with the challenges and future research directions.
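To make the fusion step of a baseline VQA pipeline concrete, below is a minimal NumPy sketch of the "vanilla" approach the abstract refers to: project CNN image features and LSTM question features into a common space, fuse them by elementwise product, and classify over a fixed answer vocabulary. All dimensions, weights, and function names here are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- illustrative only, not taken from the paper.
IMG_DIM, Q_DIM, HIDDEN, NUM_ANSWERS = 2048, 512, 1024, 10

def vanilla_vqa_step(img_feat, q_feat, Wi, Wq, Wo):
    """Sketch of vanilla VQA fusion: project image (CNN) and question
    (LSTM) features to a common space, fuse by elementwise product,
    then softmax-classify over a fixed answer vocabulary."""
    hi = np.tanh(img_feat @ Wi)      # image projection
    hq = np.tanh(q_feat @ Wq)        # question projection
    fused = hi * hq                  # pointwise fusion
    logits = fused @ Wo
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()           # softmax over candidate answers

# Stand-ins for pretrained-CNN image features and LSTM question encoding.
img_feat = rng.standard_normal(IMG_DIM)
q_feat = rng.standard_normal(Q_DIM)
Wi = rng.standard_normal((IMG_DIM, HIDDEN)) * 0.01
Wq = rng.standard_normal((Q_DIM, HIDDEN)) * 0.01
Wo = rng.standard_normal((HIDDEN, NUM_ANSWERS)) * 0.01

probs = vanilla_vqa_step(img_feat, q_feat, Wi, Wq, Wo)
```

In the actual models surveyed, the projections are learned end-to-end and the answer vocabulary covers the most frequent training answers; attention-based models such as the Stacked Attention Network replace the single pointwise fusion with repeated question-guided weighting of image regions.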
Keywords:  
Visual Question Answering
Artificial intelligence
Human computer interaction
Deep learning
CNN
LSTM
Author(s) Name:  Yash Srivastava, Vaishnav Murali, Shiv Ram Dubey & Snehasis Mukherjee
Conference name:  International Conference on Computer Vision and Image Processing
Publisher name:  Springer
DOI:  10.1007/978-981-16-1092-9_7
Paper Link:  https://link.springer.com/chapter/10.1007/978-981-16-1092-9_7