Research Area:  Machine Learning
BERT, a pre-trained Transformer model, has achieved ground-breaking performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple variant of BERT, for extractive summarization. Our system is the state of the art on the CNN/Dailymail dataset, outperforming the previous best-performing system by 1.65 on ROUGE-L.
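The 1.65-point gain is reported on ROUGE-L, which scores a candidate summary against a reference by the length of their longest common subsequence (LCS). Below is a minimal illustrative sketch of that metric, not the paper's evaluation code; whitespace tokenization and the beta value of 1.2 are assumptions here, and published comparisons normally rely on the official ROUGE toolkit.

```python
# Minimal sketch of ROUGE-L (LCS-based F-score) between a candidate summary
# and a reference summary. Tokenization is deliberately simple (lowercase,
# whitespace split); this is for illustration, not official evaluation.

def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a, 1):
        for j, tok_b in enumerate(b, 1):
            if tok_a == tok_b:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f(candidate, reference, beta=1.2):
    """ROUGE-L F-score: (1 + beta^2) * P * R / (R + beta^2 * P)."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)

if __name__ == "__main__":
    # Hypothetical candidate/reference pair, just to show the call.
    cand = "bertsum selects salient sentences from the article"
    ref = "the system extracts the most salient sentences from the source article"
    print(f"ROUGE-L F: {rouge_l_f(cand, ref):.4f}")
```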
Author(s) Name:  Yang Liu
Journal name:  arXiv preprint (Computer Science)
Publisher name:  arXiv (arXiv:1903.10318)
Paper Link:   https://arxiv.org/abs/1903.10318