Research Area:  Machine Learning
Cross-lingual representations of words enable us to reason about word meaning in multilingual contexts and are a key facilitator of cross-lingual transfer when developing natural language processing models for low-resource languages. In this survey, we provide a comprehensive typology of cross-lingual word embedding models. We compare their data requirements and objective functions. The recurring theme of the survey is that many of the models presented in the literature optimize for the same objectives, and that seemingly different models are often equivalent, modulo optimization strategies, hyper-parameters, and such. We also discuss the different ways cross-lingual word embeddings are evaluated, as well as future challenges and research horizons.
Keywords:  
Author(s) Name:  Sebastian Ruder, Ivan Vulić, Anders Søgaard
Journal name:  Journal of Artificial Intelligence Research
Conference name:  
Publisher name:  ACM
DOI:  10.1613/jair.1.11640
Volume Information:  Volume 65, Issue 1, May 2019, pp. 569–630
Paper Link:  https://dl.acm.org/doi/10.1613/jair.1.11640