Research Area:  Machine Learning
Generating textual descriptions of images has been an important topic in computer vision and natural language processing, and a number of deep learning based techniques have been proposed for this task. These techniques use human-annotated images for training and testing the models, and they require a large amount of training data to perform at their full potential. However, collecting human-annotated images with associated captions is expensive and time-consuming. In this paper, we propose an image captioning method that uses both real and synthetic data for training and testing the model. We use a Generative Adversarial Network (GAN) based text-to-image generator to produce the synthetic images, and an attention-based image captioning model trained on both real and synthetic images to generate the captions. We report both qualitative and quantitative results on widely used evaluation metrics. The experimental results demonstrate the twofold benefit of the proposed work: i) it shows the effectiveness of image captioning for synthetic images, and ii) it further improves the quality of the generated captions for real images, understandably because additional images are used for training.
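To make the described data pipeline more concrete, the following is a minimal illustrative sketch, not the authors' released code: synthetic images produced by a pretrained text-to-image GAN generator are pooled with real human-annotated images before training an attention-based captioner. The `gan_generator` interface, the `CaptionDataset` class, and the `synthesize_images` helper are assumptions introduced here for illustration only.

```python
# Sketch of mixing real and GAN-synthesized images for caption training.
# The generator and captioner are hypothetical placeholders, not the paper's models.
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class CaptionDataset(Dataset):
    """Pairs of (image tensor, tokenized caption tensor)."""
    def __init__(self, images, captions):
        self.images, self.captions = images, captions
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        return self.images[idx], self.captions[idx]

def synthesize_images(generator, text_embeddings, noise_dim=100):
    """Create synthetic images with an assumed text-to-image GAN generator
    whose interface is generator(noise, text_embedding) -> image tensor."""
    synthetic = []
    with torch.no_grad():
        for text_emb in text_embeddings:
            z = torch.randn(1, noise_dim)
            synthetic.append(generator(z, text_emb.unsqueeze(0)).squeeze(0))
    return synthetic

# Assumed usage: real_images / real_captions come from a human-annotated corpus,
# and attention_captioner is an attention-based captioning model to be trained.
# real_data = CaptionDataset(real_images, real_captions)
# synth_data = CaptionDataset(
#     synthesize_images(gan_generator, caption_embeddings), real_captions)
# train_loader = DataLoader(ConcatDataset([real_data, synth_data]),
#                           batch_size=32, shuffle=True)
# ...train attention_captioner on train_loader as usual...
```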
Keywords:  
Generative Adversarial Network
computer vision
natural language processing
Author(s) Name:  Md. Zakir Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, Hamid Laga
Journal name:  IEEE Access
Conference name:  
Publisher name:  IEEE
DOI:  10.1109/ACCESS.2021.3075579
Volume Information:  Volume 9
Paper Link:   https://ieeexplore.ieee.org/abstract/document/9416431