Cross-lingual Visual Pre-training for Multimodal Machine Translation - 2021

Research Area:  Machine Learning

Abstract:

Pre-trained language models have been shown to improve performance in many natural language tasks substantially. Although the early focus of such models was single language pre-training, recent advances have resulted in cross-lingual and visual pre-training methods. In this paper, we combine these two approaches to learn visually-grounded cross-lingual representations. Specifically, we extend the translation language modelling (Lample and Conneau, 2019) with masked region classification and perform pre-training with three-way parallel vision & language corpora. We show that when fine-tuned for multimodal machine translation, these models obtain state-of-the-art performance. We also provide qualitative insights into the usefulness of the learned grounded representations.
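
Illustrative sketch: the abstract describes a joint pre-training objective that combines translation language modelling (TLM) over concatenated source-target sentence pairs with masked region classification (MRC) over image region features. The following PyTorch sketch shows one plausible shape for such a combined objective; the module names, dimensions, single shared encoder, and omission of positional/language embeddings are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class VisualTLMSketch(nn.Module):
    """Hypothetical model: one encoder over [source tokens; target tokens; image regions]."""
    def __init__(self, vocab_size=30000, n_region_classes=1600,
                 d_model=512, region_feat_dim=2048):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # Project pooled object-detector region features into the encoder space.
        self.region_proj = nn.Linear(region_feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.tlm_head = nn.Linear(d_model, vocab_size)        # recover masked tokens (TLM)
        self.mrc_head = nn.Linear(d_model, n_region_classes)  # classify masked regions (MRC)

    def forward(self, token_ids, region_feats):
        # token_ids: (B, T) concatenated source+target pair, some ids replaced by a mask id
        # region_feats: (B, R, region_feat_dim), masked regions zeroed out
        # (positional and language embeddings omitted for brevity)
        x = torch.cat([self.tok_emb(token_ids), self.region_proj(region_feats)], dim=1)
        h = self.encoder(x)
        n_tok = token_ids.size(1)
        return self.tlm_head(h[:, :n_tok]), self.mrc_head(h[:, n_tok:])

# Toy usage: cross-entropy on masked positions only (label -100 = ignored).
model = VisualTLMSketch()
tokens = torch.randint(0, 30000, (2, 20))       # fake source+target token ids
regions = torch.randn(2, 36, 2048)              # fake detector region features
tlm_logits, mrc_logits = model(tokens, regions)

ce = nn.CrossEntropyLoss(ignore_index=-100)
tlm_labels = torch.full((2, 20), -100)          # unmasked positions ignored
tlm_labels[:, 5] = tokens[:, 5]                 # pretend position 5 was masked
mrc_labels = torch.full((2, 36), -100)
mrc_labels[:, 0] = 7                            # pretend region 0 was masked, class 7
loss = ce(tlm_logits.reshape(-1, 30000), tlm_labels.reshape(-1)) \
     + ce(mrc_logits.reshape(-1, 1600), mrc_labels.reshape(-1))

In the paper itself, pre-training uses three-way parallel data (source sentence, target sentence, image); the sketch above only illustrates how the two masked-prediction losses can share one encoder.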

Keywords:  
Multimodal Machine Translation
Pre-trained Language Models
Natural Language Tasks
Machine Learning
Deep Learning

Author(s) Name:  Ozan Caglayan, Menekse Kuyu, Mustafa Sercan Amac, Pranava Madhyastha, Erkut Erdem, Aykut Erdem, Lucia Specia

Journal name:   arXiv preprint (subject: Computation and Language, cs.CL)

Conference name:  

Publisher name:  arXiv (preprint arXiv:2101.10044)

DOI:  10.48550/arXiv.2101.10044

Volume Information: