A text-based visual context modulation neural model for multimodal machine translation - 2020

Research Area:  Machine Learning

Abstract:

We introduce a novel multimodal machine translation model that integrates image features modulated by their captions. Images generally contain far more information than their descriptions convey. Furthermore, in the multimodal machine translation task, feature maps are commonly extracted from networks pre-trained for object recognition, so it is not appropriate to use these feature maps directly. To extract the visual features associated with the text, we design a modulation network that conditions the visual information from the pre-trained CNN on the textual information from the encoder. However, because multimodal translation data are scarce, overly complicated models can perform poorly; for simplicity, we apply a feature-wise multiplicative transformation. Our model is therefore a modular trainable network that can be embedded in the architecture of existing multimodal translation models. We verified our model by conducting experiments with the Transformer model on the Multi30k dataset, evaluating translation quality with the BLEU and METEOR metrics. In general, our model improved over a text-based model and other existing models.
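
The feature-wise multiplicative transformation described in the abstract can be illustrated with a short sketch. The snippet below is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: it assumes a pooled textual summary vector from the encoder and channel-wise CNN feature maps, and the sigmoid gating, layer names, and dimensions are all hypothetical choices for the example.

```python
import torch
import torch.nn as nn


class FeatureWiseModulation(nn.Module):
    """Feature-wise multiplicative modulation of CNN feature maps by text.

    A minimal sketch of the idea in the abstract: a gating vector is
    predicted from the encoder's textual summary and multiplied
    channel-wise into the pre-trained CNN feature maps. The sigmoid
    gate and all dimensions are illustrative assumptions.
    """

    def __init__(self, text_dim: int, num_channels: int):
        super().__init__()
        # Predict one multiplicative scale per visual channel from the text.
        self.gate = nn.Linear(text_dim, num_channels)

    def forward(self, visual_feats: torch.Tensor, text_summary: torch.Tensor):
        # visual_feats: (batch, channels, H, W) from a pre-trained CNN
        # text_summary: (batch, text_dim), e.g. a pooled encoder state
        scale = torch.sigmoid(self.gate(text_summary))  # (batch, channels)
        scale = scale.unsqueeze(-1).unsqueeze(-1)       # (batch, channels, 1, 1)
        return visual_feats * scale                     # channel-wise gating


if __name__ == "__main__":
    batch, text_dim, channels, h, w = 2, 512, 2048, 7, 7
    mod = FeatureWiseModulation(text_dim, channels)
    v = torch.randn(batch, channels, h, w)   # e.g. ResNet conv features
    t = torch.randn(batch, text_dim)         # pooled Transformer encoder output
    print(mod(v, t).shape)                   # torch.Size([2, 2048, 7, 7])
```

Because the modulation is a single linear layer plus an element-wise product, it adds few parameters, which matches the abstract's rationale that overly complicated models perform poorly on scarce multimodal translation data.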

Keywords:  Multimodal machine translation, CNN, Machine Learning, Deep Learning

Author(s) Name:  Soonmo Kwon, Byung-Hyun Go, Jong-Hyeok Lee

Journal name:  Pattern Recognition Letters

Publisher name:  Elsevier

DOI:  10.1016/j.patrec.2020.06.010

Volume Information:  Volume 136, August 2020, Pages 212-218