
Deep Hierarchical Encoder-Decoder Network for Image Captioning - 2019

Research Area:  Machine Learning

Abstract:

Encoder-decoder models have been widely used for image captioning, and most of them are built on a single long short-term memory (LSTM) network. The capacity of such a single-layer network, in which the encoder and decoder are integrated together, is limited for a task as complex as image captioning. Moreover, how to effectively increase the "vertical depth" of the encoder-decoder remains to be solved. To deal with these problems, a novel deep hierarchical encoder-decoder network is proposed for image captioning, in which a deep hierarchical structure is explored to separate the functions of the encoder and the decoder. This model efficiently exploits the representation capacity of deep networks to fuse high-level semantics of vision and language when generating captions. Specifically, visual representations at the top levels of abstraction are considered simultaneously, and each of these levels is associated with one LSTM. The bottom-most LSTM serves as the encoder of the textual inputs, while the middle LSTM layer enhances the decoding ability of the top-most LSTM. Furthermore, by introducing a semantic enhancement module for image features and a distribution combine module for text features, several architectural variants of our model are constructed to explore the impacts of, and mutual interactions among, the visual representation, the textual representations, and the output of the middle LSTM layer. In particular, the framework is trained with a reinforcement learning method, using policy gradient optimization to address the exposure bias between training and testing. Qualitative analyses illustrate how our model "translates" an image into a sentence, and further visualizations show the evolution of the hidden states of the different hierarchical LSTMs over time. Extensive experiments demonstrate that our model outperforms current state-of-the-art models on three benchmark datasets: Flickr8K, Flickr30K, and MSCOCO. On both image captioning and retrieval tasks, our method achieves the best results, and it also achieves superior performance on the MSCOCO captioning leaderboard.
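
The abstract describes a three-level LSTM hierarchy: a bottom LSTM that encodes the textual input, a middle LSTM that fuses it with the visual representation, and a top LSTM that decodes the caption. The following is a minimal PyTorch sketch of that idea, assuming a pre-extracted, pooled CNN image feature and simple concatenation for vision-language fusion; the class name HierarchicalCaptioner, all dimensions, and the fusion scheme are illustrative assumptions rather than the authors' implementation, and the semantic enhancement module, distribution combine module, and policy-gradient fine-tuning stage are omitted.

# Minimal sketch (PyTorch) of a three-level hierarchical encoder-decoder for
# captioning. All names, dimensions, and the concatenation-based fusion are
# illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn

class HierarchicalCaptioner(nn.Module):
    def __init__(self, vocab_size, img_dim=2048, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.img_proj = nn.Linear(img_dim, hidden_dim)         # project CNN feature
        self.bottom = nn.LSTMCell(embed_dim, hidden_dim)       # textual encoder
        self.middle = nn.LSTMCell(2 * hidden_dim, hidden_dim)  # fuses vision + text
        self.top = nn.LSTMCell(hidden_dim, hidden_dim)         # caption decoder
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feat, captions):
        # img_feat: (B, img_dim) pooled CNN feature; captions: (B, T) token ids
        B, T = captions.shape
        v = self.img_proj(img_feat)                            # (B, hidden_dim)
        zeros = v.new_zeros(B, v.size(1))                      # initial hidden/cell states
        h1, c1, h2, c2, h3, c3 = (zeros,) * 6
        logits = []
        for t in range(T):                                     # teacher forcing
            w = self.embed(captions[:, t])                     # (B, embed_dim)
            h1, c1 = self.bottom(w, (h1, c1))                  # encode text
            h2, c2 = self.middle(torch.cat([h1, v], dim=1), (h2, c2))  # fuse vision + text
            h3, c3 = self.top(h2, (h3, c3))                    # decode
            logits.append(self.out(h3))
        return torch.stack(logits, dim=1)                      # (B, T, vocab_size)

# Example usage with random inputs (hypothetical vocabulary of 10,000 tokens):
model = HierarchicalCaptioner(vocab_size=10000)
scores = model(torch.randn(4, 2048), torch.randint(0, 10000, (4, 12)))  # (4, 12, 10000)

In the training setup described in the abstract, a model of this kind would first be trained with a cross-entropy objective under teacher forcing (as in the forward pass above) and then fine-tuned with a policy-gradient reinforcement learning objective on a sequence-level reward to reduce exposure bias between training and testing.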

Keywords:  

Author(s) Name:  Xinyu Xiao; Lingfeng Wang; Kun Ding; Shiming Xiang; Chunhong Pan

Journal name:  IEEE Transactions on Multimedia

Conference name:  

Publisher name:  IEEE

DOI:  10.1109/TMM.2019.2915033

Volume Information:  Volume: 21, Issue: 11, November 2019, Page(s): 2942 - 2956