
Denoising based Sequence-to-Sequence Pre-training for Text Generation - 2019

Research Area:  Machine Learning

Abstract:

This paper presents a new sequence-to-sequence (seq2seq) pre-training method, PoDA (Pre-training of Denoising Autoencoders), which learns representations suitable for text generation tasks. Unlike encoder-only (e.g., BERT) or decoder-only (e.g., OpenAI GPT) pre-training approaches, PoDA jointly pre-trains both the encoder and decoder by denoising noise-corrupted text, and it also has the advantage of keeping the network architecture unchanged in the subsequent fine-tuning stage. Meanwhile, we design a hybrid model of Transformer and pointer-generator networks as the backbone architecture for PoDA. We conduct experiments on two text generation tasks: abstractive summarization and grammatical error correction. Results on four datasets show that PoDA can improve model performance over strong baselines without using any task-specific techniques, and significantly speeds up convergence.
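
The denoising objective described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the specific noise operations (random token deletion, masking, and local shuffling) and their rates are hypothetical choices made for this example. It produces (noisy input, clean target) pairs; a full encoder-decoder, such as the Transformer plus pointer-generator backbone the paper describes, would then be pre-trained to reconstruct the clean target from the noisy input.

# Illustrative sketch of denoising-autoencoder-style pre-training data
# (hypothetical noise operations; not the authors' code).
import random

MASK = "<mask>"

def corrupt(tokens, p_del=0.1, p_mask=0.1, shuffle_window=3):
    """Corrupt a token list by random deletion, masking, and local shuffling."""
    out = []
    for tok in tokens:
        r = random.random()
        if r < p_del:
            continue              # randomly delete a token
        elif r < p_del + p_mask:
            out.append(MASK)      # randomly mask a token
        else:
            out.append(tok)
    # lightly shuffle tokens within small local windows
    for i in range(0, len(out), shuffle_window):
        window = out[i:i + shuffle_window]
        random.shuffle(window)
        out[i:i + shuffle_window] = window
    return out

if __name__ == "__main__":
    src = "the quick brown fox jumps over the lazy dog".split()
    noisy = corrupt(src)
    # A seq2seq model is trained to map `noisy` back to `src`; because the
    # same encoder-decoder is used throughout, fine-tuning on downstream
    # generation tasks requires no architectural change.
    print("input :", " ".join(noisy))
    print("target:", " ".join(src))

Because pre-training and fine-tuning share the same encoder-decoder network, the pre-trained weights can be reused directly for downstream tasks such as summarization or grammatical error correction, which the abstract cites as an advantage of PoDA over encoder-only or decoder-only pre-training.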

Keywords:  

Author(s) Name:  Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, Jingming Liu

Journal name:  Computation and Language

Conference name:  

Publisher name:  arXiv

DOI:  10.48550/arXiv.1908.08206 (arXiv:1908.08206)

Volume Information: