Research Area:  Machine Learning
Labelling image-sentence pairs is expensive, and some unsupervised image captioning methods show promising results on caption generation. However, the generated captions are often not relevant to the images because they depend excessively on the sentence corpus. To overcome this drawback, we focus on the correspondence between images and sentences to construct image captions with a better mapping relation. In this paper, we present a novel triple sequence generative adversarial network comprising an image generator, a discriminator, and a sentence generator. The image generator generates image regions for words. Meanwhile, the sentence corpus guides the sentence generator based on the generated image regions, and the discriminator judges the relevance between the words in a sentence and the generated image regions. In the experiments, we train our model on a large number of unpaired images and sentences in the unsupervised, unpaired setting. The experimental results demonstrate that our method achieves significant improvements over all baselines.
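To make the three components concrete, below is a minimal PyTorch sketch of the architecture as described in the abstract: an image generator that maps words to region features, a sentence generator conditioned on those regions, and a discriminator that scores word-region relevance. Everything here is an assumption for illustration: the module names, layer sizes, vocabulary size, and feature dimensions (e.g. 2048-d region features, 300-d word embeddings) are hypothetical and are not taken from the paper.

import torch
import torch.nn as nn

class ImageGenerator(nn.Module):
    """Maps word embeddings to synthetic image-region features (hypothetical dims)."""
    def __init__(self, word_dim=300, region_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(word_dim, 1024), nn.ReLU(),
            nn.Linear(1024, region_dim),
        )

    def forward(self, word_emb):           # (batch, seq_len, word_dim)
        return self.net(word_emb)          # (batch, seq_len, region_dim)

class SentenceGenerator(nn.Module):
    """LSTM decoder that emits word logits conditioned on generated regions."""
    def __init__(self, region_dim=2048, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.lstm = nn.LSTM(region_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, regions):            # (batch, seq_len, region_dim)
        h, _ = self.lstm(regions)
        return self.out(h)                 # (batch, seq_len, vocab_size)

class Discriminator(nn.Module):
    """Scores the relevance of each word to its generated image region."""
    def __init__(self, word_dim=300, region_dim=2048):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(word_dim + region_dim, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, word_emb, regions):
        pair = torch.cat([word_emb, regions], dim=-1)
        return torch.sigmoid(self.score(pair)).squeeze(-1)  # (batch, seq_len)

# Smoke test with random tensors standing in for unpaired data.
if __name__ == "__main__":
    words = torch.randn(4, 12, 300)        # batch of word embeddings
    g_img, g_sent, d = ImageGenerator(), SentenceGenerator(), Discriminator()
    regions = g_img(words)                 # words -> image regions
    logits = g_sent(regions)               # regions -> next-word logits
    relevance = d(words, regions)          # word/region relevance in [0, 1]
    print(regions.shape, logits.shape, relevance.shape)

The adversarial training loop itself (how the discriminator's relevance scores are turned into generator losses on unpaired data) is not specified in the abstract, so it is omitted here.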
Keywords:  labelling, image-sentence, expensive, unsupervised, image captioning, corpus guide, unpaired setting
Author(s) Name:  Marc'Aurelio Ranzato, Joshua Susskind, Volodymyr Mnih, Geoffrey Hinton
Journal name:  
Conference name:  CVPR 2011
Publisher name:  IEEE
DOI:  https://doi.org/10.1109/CVPR.2011.5995710
Volume Information:  -
Paper Link:   https://ieeexplore.ieee.org/abstract/document/5995710