Research Area:  Machine Learning
Learning effective representations of words has long been a research focus in natural language processing and other machine learning tasks. In many early approaches, a word is represented as a discrete one-hot vector. Such a representation not only suffers from the curse of dimensionality but also fails to reflect semantic relationships between words. Recent work focuses on learning low-dimensional, continuous vector representations of words, known as word embeddings, which can be readily applied to downstream tasks such as machine translation, natural language inference, and semantic analysis. In this paper, we introduce the development of word embedding, describe representative methods, and report recent research trends. This paper provides a quick guide to the principles of word embedding and its development.
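The contrast drawn in the abstract can be illustrated with a minimal sketch (not from the paper itself; the vocabulary and embedding values below are made-up assumptions): one-hot vectors make every pair of distinct words orthogonal, so their similarity is always zero, while low-dimensional dense embeddings can place related words close together.

```python
def one_hot(index, vocab_size):
    """Discrete one-hot vector: its dimension grows with the vocabulary size."""
    vec = [0.0] * vocab_size
    vec[index] = 1.0
    return vec

def dot(u, v):
    """Dot product as a simple similarity measure."""
    return sum(a * b for a, b in zip(u, v))

vocab = ["king", "queen", "apple"]

# One-hot: distinct words are always orthogonal, so no semantic
# relationship between "king" and "queen" is reflected.
king_oh = one_hot(0, len(vocab))
queen_oh = one_hot(1, len(vocab))
print(dot(king_oh, queen_oh))  # 0.0

# Dense embeddings (hypothetical 2-d values for illustration only):
# semantically related words can have high similarity.
embeddings = {
    "king":  [0.9, 0.1],
    "queen": [0.8, 0.2],
    "apple": [0.1, 0.9],
}
print(dot(embeddings["king"], embeddings["queen"]))  # higher than...
print(dot(embeddings["king"], embeddings["apple"]))  # ...this
```

In practice such embeddings are learned from large corpora (e.g. by skip-gram or CBOW objectives) rather than hand-assigned as above.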
Author(s) Name:  Qilu Jiao and Shunyao Zhang
Conference name:  2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference
Publisher name:  IEEE
Volume Information:  Not Available
Paper Link:   https://ieeexplore.ieee.org/document/9390956