Research Area:  Machine Learning
Abstract:  We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et al., 2019). Experiments show that BERTweet outperforms the strong baselines RoBERTa-base and XLM-R-base (Conneau et al., 2020), producing better results than the previous state-of-the-art models on three Tweet NLP tasks: part-of-speech tagging, named-entity recognition and text classification. We release BERTweet under the MIT License to facilitate future research and applications on Tweet data.
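For readers who want to experiment with the released model, the short Python sketch below shows one possible way to load it for feature extraction. It is not taken from the paper: it assumes the Hugging Face transformers library and the checkpoint identifier "vinai/bertweet-base", neither of which is stated in this record. The contextual embeddings it produces could then feed downstream heads for the three tasks evaluated in the paper (POS tagging, NER, text classification).

# Minimal sketch: load BERTweet-style checkpoint and extract contextual embeddings.
# Assumed dependencies: torch, transformers; assumed model name: "vinai/bertweet-base".
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModel.from_pretrained("vinai/bertweet-base")

# Example Tweet, already normalized in the paper's style (URLs -> HTTPURL, user mentions -> @USER).
tweet = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER"
inputs = tokenizer(tweet, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size);
# these token-level features can back POS-tagging, NER, or classification heads.
print(outputs.last_hidden_state.shape)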
Keywords:  
Author(s) Name:  Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen
Journal name:  Computer Science
Conference name:  
Publisher name:  arXiv
DOI:  10.48550/arXiv.2005.10200
Volume Information:  
Paper Link:  https://arxiv.org/abs/2005.10200