

Image Captioning with Deep Bidirectional LSTMs and Multi-Task Learning - 2018


Research Area:  Machine Learning

Abstract:

Generating a novel and descriptive caption of an image is drawing increasing interest in the computer vision, natural language processing, and multimedia communities. In this work, we propose an end-to-end trainable deep bidirectional LSTM (Bi-LSTM (Long Short-Term Memory)) model to address the problem. By combining a deep convolutional neural network (CNN) and two separate LSTM networks, our model is capable of learning long-term visual-language interactions by making use of history and future context information in a high-level semantic space. We also explore deep multimodal bidirectional models, in which we increase the depth of the nonlinearity transition in different ways to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirror are proposed to prevent overfitting when training deep models. To understand how our models “translate” an image to a sentence, we visualize and qualitatively analyze the evolution of Bi-LSTM internal states over time. The effectiveness and generality of the proposed models are evaluated on four benchmark datasets: Flickr8K, Flickr30K, MSCOCO, and Pascal1K. We demonstrate that Bi-LSTM models achieve highly competitive performance on both caption generation and image-sentence retrieval, even without integrating an additional mechanism (e.g., object detection, attention model). Our experiments also show that multi-task learning is beneficial for increasing model generality and gaining performance. We further demonstrate that the Bi-LSTM model with transfer learning significantly outperforms previous methods on the Pascal1K dataset.
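To make the architecture in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of the core idea: a precomputed CNN image feature is fed as the first "word" of the sequence, a forward and a backward LSTM each read the sequence, and their hidden states are concatenated and projected to vocabulary logits. All sizes, weight initializations, and names here are hypothetical placeholders; a real system would use learned embeddings, trained weights, and a deep CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One LSTM step; W maps the concatenated [x; h] to the
    four gate pre-activations (input, forget, output, cell)."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def run_lstm(seq, W, hidden):
    """Run an LSTM over a (T, embed) sequence, returning (T, hidden) states."""
    h, c = np.zeros(hidden), np.zeros(hidden)
    outs = []
    for x in seq:
        h, c = lstm_step(x, h, c, W)
        outs.append(h)
    return np.stack(outs)

# Hypothetical sizes for illustration only.
embed, hidden, vocab, T = 8, 16, 50, 5

img_feat = rng.normal(size=embed)          # stands in for CNN image features
tokens = rng.normal(size=(T, embed))       # stands in for embedded caption tokens
seq = np.vstack([img_feat, tokens])        # image feature fed as the first "word"

scale = 0.1
W_fwd = rng.normal(size=(4 * hidden, embed + hidden)) * scale
W_bwd = rng.normal(size=(4 * hidden, embed + hidden)) * scale
W_out = rng.normal(size=(vocab, 2 * hidden)) * scale

# Forward LSTM reads left-to-right; backward LSTM reads right-to-left,
# and its outputs are flipped back so positions align.
h_fwd = run_lstm(seq, W_fwd, hidden)
h_bwd = run_lstm(seq[::-1], W_bwd, hidden)[::-1]

# Concatenate both directions and project to per-position vocabulary logits.
logits = np.concatenate([h_fwd, h_bwd], axis=1) @ W_out.T
print(logits.shape)  # (6, 50): one vocabulary distribution per position
```

The backward pass is what distinguishes this from a standard unidirectional captioning model: each position's prediction can condition on both history and future context, as described in the abstract.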

Keywords:  
Image Captioning
Deep Bidirectional LSTMs
Multi-Task Learning
Machine Learning
Deep Learning

Author(s) Name:  Cheng Wang, Haojin Yang, Christoph Meinel

Journal name:  ACM Transactions on Multimedia Computing, Communications, and Applications

Conference name:  

Publisher name:  ACM

DOI:  10.1145/3115432

Volume Information:  Volume 14, Issue 2s, April 2018, Article No.: 40, pp. 1–20