Research Area:  Machine Learning
Abstract:  In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations in order to adapt to new tasks and domains. MT-DNN extends the model proposed in Liu et al. (2015) by incorporating a pre-trained bidirectional transformer language model, known as BERT (Devlin et al., 2018). MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.7% (2.2% absolute improvement). We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations.
Keywords:  
Author(s) Name:  Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao
Journal name:  None (arXiv Computer Science preprint)
Conference name:  
Publisher name:  arXiv (preprint arXiv:1901.11504)
DOI:  10.48550/arXiv.1901.11504
Volume Information:  
Paper Link:   https://arxiv.org/abs/1901.11504
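
Illustrative sketch:  The abstract describes a shared representation encoder (pre-trained BERT) topped with task-specific output layers and trained jointly on several NLU tasks, so that cross-task data regularizes the shared encoder. The following is a minimal sketch of that pattern, not the authors' code: a small nn.TransformerEncoder stands in for pre-trained BERT, and the task names, label counts, and hyperparameters are hypothetical placeholders.

```python
# Minimal sketch of the shared-encoder / task-specific-head pattern described above.
# A tiny transformer stands in for pre-trained BERT; all names are illustrative.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared representation layers (BERT in MT-DNN; a small transformer here)."""
    def __init__(self, vocab_size=30522, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))   # (batch, seq, d_model)
        return h[:, 0]                            # first-token vector as sentence representation

class MultiTaskModel(nn.Module):
    """One shared encoder plus one lightweight classification head per NLU task."""
    def __init__(self, task_num_labels, d_model=128):
        super().__init__()
        self.encoder = SharedEncoder(d_model=d_model)
        self.heads = nn.ModuleDict(
            {task: nn.Linear(d_model, n) for task, n in task_num_labels.items()}
        )

    def forward(self, token_ids, task):
        return self.heads[task](self.encoder(token_ids))

# Hypothetical task set: classification-style tasks with different label counts.
tasks = {"mnli": 3, "sst2": 2, "qnli": 2}
model = MultiTaskModel(tasks)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Multi-task training loop: for each task, take a batch and update the shared
# encoder together with that task's head, so cross-task data shapes the encoder.
for step in range(3):
    for task, n_labels in tasks.items():
        token_ids = torch.randint(0, 30522, (8, 16))   # dummy batch of token ids
        labels = torch.randint(0, n_labels, (8,))      # dummy labels
        loss = loss_fn(model(token_ids, task), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The key design choice mirrored here is that only the output heads are task-specific; because every task's gradient updates the same encoder, the learned representations are pushed toward generality, which is the regularization effect the abstract credits for improved domain adaptation.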