Research Area:  Machine Learning
Visual Question Answering (VQA) in the medical domain has attracted increasing attention from the research community in recent years due to its many applications. This paper investigates several deep learning approaches to building a medical VQA system based on ImageCLEF's VQA-Med dataset, which consists of about 4K images and about 15K question-answer pairs. Due to the wide variety of images and questions in this dataset, the proposed model is hierarchical, consisting of several sub-models, each tailored to handle certain questions. To this end, a dedicated model classifies the questions into four categories, and each category is handled by a separate sub-model. At their core, all of these models are built on pre-trained Convolutional Neural Networks (CNNs). To obtain the best results, extensive experiments are performed and various techniques are employed, including Data Augmentation (DA), Multi-Task Learning (MTL), Global Average Pooling (GAP), ensembling, and Sequence-to-Sequence (Seq2Seq) models. Overall, the final model achieves an accuracy of 60.8 and a BLEU score of 63.4, which are competitive with state-of-the-art results despite using simpler and less demanding sub-models.
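The hierarchical design described above can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: the category names and the keyword-based classifier are placeholders standing in for the paper's trained question-classification model, and the stub sub-models stand in for its pre-trained CNN sub-models.

```python
# Sketch of a hierarchical VQA pipeline: a question classifier routes
# each question to a category-specific sub-model.
# NOTE: category names and keyword rules below are illustrative
# assumptions, not the paper's actual classifier.

CATEGORIES = ["modality", "plane", "organ", "abnormality"]

def classify_question(question: str) -> str:
    """Toy stand-in for the paper's question-classification model."""
    q = question.lower()
    if "modality" in q or "mri" in q or "ct" in q:
        return "modality"
    if "plane" in q:
        return "plane"
    if "organ" in q or "part" in q:
        return "organ"
    return "abnormality"

def answer(image, question: str, sub_models: dict) -> str:
    """Dispatch the question to the sub-model of its predicted category."""
    category = classify_question(question)
    return sub_models[category](image, question)

# Placeholder sub-models; in the paper, each is a separately trained
# model built on a pre-trained CNN.
sub_models = {c: (lambda img, q, c=c: f"<{c} answer>") for c in CATEGORIES}

print(answer(None, "In what plane is the image taken?", sub_models))
```

The benefit of this routing structure is that each sub-model only needs to handle one narrow question type, so each can stay small and specialized rather than one large model covering all question styles.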
Keywords:  
Visual question answering
medical domain
deep learning
Convolutional Neural Networks
Multi-Task Learning
Data Augmentation
Author(s) Name:  Aisha Al-Sadi, Mahmoud Al-Ayyoub, Yaser Jararweh, Fumie Costen
Journal name:  Pattern Recognition Letters
Conference name:  
Publisher name:  Elsevier
DOI:  10.1016/j.patrec.2021.07.002
Volume Information:  Volume 150, October 2021, Pages 57-75
Paper Link:   https://www.sciencedirect.com/science/article/abs/pii/S0167865521002348