Research Area:  Machine Learning
We introduce a novel multimodal machine translation model that integrates image features modulated by the corresponding caption. Images generally contain far more information than their descriptions alone. Moreover, in the multimodal machine translation task, feature maps are commonly extracted from a network pre-trained for object recognition, so it is not appropriate to use these feature maps directly. To extract the visual features relevant to the text, we design a modulation network that combines textual information from the encoder with visual information from the pre-trained CNN. However, because multimodal translation data is scarce, an overly complicated model could perform poorly; for simplicity, we apply a feature-wise multiplicative transformation. Our model is therefore a modular trainable network that can be embedded in existing multimodal translation architectures. We verified our model by conducting experiments on the Transformer model with the Multi30k dataset and evaluating translation quality using the BLEU and METEOR metrics. Overall, our model improved over a text-only baseline and other existing models.
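The abstract describes modulating pre-trained CNN feature maps with a feature-wise multiplicative transformation driven by the encoder's textual representation. Below is a minimal sketch, assuming a PyTorch setup, of one way such a modulation could look; the class name, parameter names, and pooling choice are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class TextModulatedVisualFeatures(nn.Module):
    """Scale CNN feature maps channel-wise with gates predicted from text."""

    def __init__(self, d_text: int, n_channels: int):
        super().__init__()
        # A single linear layer keeps the modulation network small, in line with
        # the paper's motivation that scarce multimodal data favors simple models.
        self.gate = nn.Linear(d_text, n_channels)

    def forward(self, text_feats: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # text_feats:   (batch, seq_len, d_text) encoder states
        # visual_feats: (batch, n_channels, H, W) feature maps from the pre-trained CNN
        pooled = text_feats.mean(dim=1)                # (batch, d_text); pooling is an assumption
        scale = torch.sigmoid(self.gate(pooled))       # (batch, n_channels), one gate per channel
        scale = scale.unsqueeze(-1).unsqueeze(-1)      # (batch, n_channels, 1, 1)
        return visual_feats * scale                    # feature-wise multiplicative transformation

The modulated feature maps would then be fed to the multimodal translation decoder in place of the raw CNN features, leaving the rest of the architecture unchanged.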
Keywords:  
Multimodal machine translation
CNN
Machine Learning
Deep Learning
Author(s) Name:  Soonmo Kwon, Byung-Hyun Go, Jong-Hyeok Lee
Journal name:  Pattern Recognition Letters
Conference name:  
Publisher name:  Elsevier
DOI:  10.1016/j.patrec.2020.06.010
Volume Information:  Volume 136, August 2020, Pages 212-218
Paper Link:   https://www.sciencedirect.com/science/article/abs/pii/S0167865520302282