
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization - 2019

HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization

Research Area:  Machine Learning

Abstract:

Neural extractive summarization models usually employ a hierarchical encoder for document encoding, and they are trained using sentence-level labels that are created heuristically with rule-based methods. Training the hierarchical encoder with these inaccurate labels is challenging. Inspired by recent work on pre-training transformer sentence encoders (Devlin et al., 2018), we propose HIBERT (shorthand for HIerarchical Bidirectional Encoder Representations from Transformers) for document encoding, together with a method to pre-train it on unlabeled data. We apply the pre-trained HIBERT to our summarization model, and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of the New York Times dataset. We also achieve state-of-the-art performance on these two datasets.
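To make the hierarchical-encoder idea in the abstract concrete, below is a minimal sketch of a two-level document encoder for extractive summarization: a sentence-level Transformer encodes the tokens of each sentence, a document-level Transformer encodes the resulting sentence vectors, and a linear head scores each sentence for extraction. This is not the authors' implementation; it uses PyTorch, and the class name, dimensions, and first-token pooling are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a hierarchical document encoder
# for extractive summarization. All hyperparameters are hypothetical.
import torch
import torch.nn as nn

class HierarchicalDocumentEncoder(nn.Module):
    """Sentence-level Transformer over tokens, then a document-level
    Transformer over sentence vectors, then per-sentence extraction scores."""
    def __init__(self, vocab_size=30000, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        sent_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        doc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.sent_encoder = nn.TransformerEncoder(sent_layer, num_layers)
        self.doc_encoder = nn.TransformerEncoder(doc_layer, num_layers)
        self.classifier = nn.Linear(d_model, 1)  # one extraction logit per sentence

    def forward(self, doc_tokens):
        # doc_tokens: (num_sentences, sentence_length) token ids of one document
        tok = self.embed(doc_tokens)                    # (S, L, d)
        sent_states = self.sent_encoder(tok)            # token-level encoding, (S, L, d)
        sent_vecs = sent_states[:, 0, :].unsqueeze(0)   # first-token pooling -> (1, S, d)
        doc_states = self.doc_encoder(sent_vecs)        # sentence-level encoding, (1, S, d)
        return self.classifier(doc_states).squeeze(-1)  # (1, S) sentence extraction logits

# Usage: score 4 sentences of 12 tokens each from a toy document.
model = HierarchicalDocumentEncoder()
doc = torch.randint(0, 30000, (4, 12))
logits = model(doc)
print(logits.shape)  # torch.Size([1, 4])
```

In the paper's setting, the hierarchical encoder would first be pre-trained on unlabeled documents and then fine-tuned with the (heuristic) sentence-level extraction labels; the sketch above only shows the supervised scoring path.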

Keywords:  

Author(s) Name:  Xingxing Zhang, Furu Wei, Ming Zhou

Journal name:  Computer Science

Conference name:  

Publisher name:  arXiv (arXiv:1905.06566)

DOI:  10.48550/arXiv.1905.06566

Volume Information: