Research Area:  Machine Learning
Existing studies learn sentiment-specific word representations to boost the performance of Twitter sentiment classification by encoding both n-gram and distantly supervised tweet sentiment information in the learning process. Pioneering efforts explicitly or implicitly assume that every word within a tweet carries the same sentiment polarity as the whole tweet, which ignores the sentiment polarity of each individual word. To alleviate this problem, we propose to learn sentiment-specific word embeddings by exploiting both lexicon resources and distantly supervised information. In particular, we develop a multi-level sentiment-enriched word embedding learning method, which employs a parallel asymmetric neural network to model n-gram, word-level sentiment, and tweet-level sentiment in the learning process. Extensive experiments on standard benchmarks demonstrate that our approach outperforms state-of-the-art methods.
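The abstract describes the architecture only at a high level. Purely as an illustrative sketch (not the authors' implementation), the snippet below shows how a shared embedding table might feed three parallel heads: one scoring an n-gram window, one predicting word-level sentiment for the centre word from a lexicon label, and one predicting tweet-level sentiment from a distantly supervised label. The framework (PyTorch), class name, dimensions, and head structure are all assumptions made for this sketch.

```python
# Illustrative sketch only -- not the paper's released code.
# Assumes PyTorch; all names, sizes, and layer choices are hypothetical.
import torch
import torch.nn as nn

class MultiLevelSentimentEmbedding(nn.Module):
    """Shared word embeddings trained with three parallel objectives:
    (1) an n-gram context score, (2) word-level sentiment from a lexicon,
    (3) tweet-level sentiment from distant supervision."""

    def __init__(self, vocab_size, embed_dim=50, window=3, hidden=20, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # shared lookup table
        self.ngram_head = nn.Sequential(                   # scores the whole window
            nn.Linear(window * embed_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.word_head = nn.Sequential(                    # word-level sentiment (centre word)
            nn.Linear(embed_dim, hidden), nn.Tanh(), nn.Linear(hidden, num_classes))
        self.tweet_head = nn.Sequential(                   # tweet-level sentiment (window as proxy)
            nn.Linear(window * embed_dim, hidden), nn.Tanh(), nn.Linear(hidden, num_classes))

    def forward(self, ngram_ids):
        # ngram_ids: (batch, window) token indices
        e = self.embed(ngram_ids)                          # (batch, window, embed_dim)
        flat = e.flatten(1)                                # concatenated window embeddings
        center = e[:, ngram_ids.size(1) // 2, :]           # centre word embedding
        return (self.ngram_head(flat),                     # n-gram plausibility score
                self.word_head(center),                    # word sentiment logits
                self.tweet_head(flat))                     # tweet sentiment logits
```

In such a sketch, training would combine a ranking loss on the n-gram score (against a corrupted window) with cross-entropy losses on the two sentiment heads; the actual branch asymmetry and loss weighting are defined in the paper, not here.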
Keywords:  
Author(s) Name:  Shufeng Xiong, Hailian Lv, Weiting Zhao, Donghong Ji
Journal name:  Neurocomputing
Conference name:  
Publisher name:  Elsevier
DOI:  https://doi.org/10.1016/j.neucom.2017.11.023
Volume Information:  Volume 275, 31 January 2018, Pages 2459-2466
Paper Link:   https://www.sciencedirect.com/science/article/abs/pii/S0925231217317599