Research Area:  Machine Learning
As the core component of a Natural Language Processing (NLP) system, a Language Model (LM) provides word representations and probability estimates for word sequences. Neural Network Language Models (NNLMs) overcome the curse of dimensionality and improve on the performance of traditional LMs. This paper presents a survey of NNLMs. The structure of classic NNLMs is described first, and then some major improvements are introduced and analyzed. We summarize and compare the corpora and toolkits used for NNLMs. Finally, some research directions for NNLMs are discussed.
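The "classic NNLM" structure the abstract refers to can be illustrated with a toy forward pass in the style of a feedforward NNLM: embed the previous n-1 words, concatenate the embeddings, apply a tanh hidden layer, and produce a softmax distribution over the vocabulary. All sizes and weights below are hypothetical, chosen only for demonstration; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: vocabulary size, embedding dim,
# n-gram order (context of n-1 words), hidden-layer size.
V, d, n, h = 10, 4, 3, 8

C = rng.normal(0, 0.1, (V, d))             # word embedding matrix
H = rng.normal(0, 0.1, (h, (n - 1) * d))   # hidden-layer weights
b = np.zeros(h)                            # hidden-layer bias
U = rng.normal(0, 0.1, (V, h))             # output weights
c = np.zeros(V)                            # output bias

def nnlm_probs(context_ids):
    """Return P(w | context) for every word w in the vocabulary."""
    x = np.concatenate([C[i] for i in context_ids])  # concatenated embeddings
    a = np.tanh(H @ x + b)                           # hidden representation
    logits = U @ a + c
    e = np.exp(logits - logits.max())                # numerically stable softmax
    return e / e.sum()

p = nnlm_probs([1, 5])  # ids of the previous n-1 = 2 words
```

Because word identities are mapped into a continuous embedding space, probability mass is shared among similar contexts, which is how NNLMs sidestep the curse of dimensionality that afflicts count-based n-gram models.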
Author(s) Name:  Kun Jing, Jungang Xu
Journal name:  Computer Science
Publisher name:  arXiv (preprint arXiv:1906.03591)
Paper Link:   https://arxiv.org/abs/1906.03591