Research Area:  Machine Learning
In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text. In particular, we propose to learn neural rewards that model cross-sentence ordering as a means to approximate desired discourse structure. Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with cross-entropy or with reinforcement learning using commonly used scores as rewards.
Keywords:  Discourse-Aware Neural Rewards, Coherent Text Generation, Machine Learning, Deep Learning
Author(s) Name:  Antoine Bosselut, Asli Celikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, Yejin Choi
Journal name:  Computation and Language (arXiv subject category cs.CL)
Conference name:  
Publisher name:  arXiv
arXiv ID:  arXiv:1805.03766
Volume Information:  
Paper Link:  https://arxiv.org/abs/1805.03766
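The abstract describes training a generator by reinforcement learning against a learned reward that scores cross-sentence ordering. As a rough illustration only (not the authors' implementation), the shape of such an update can be sketched with a REINFORCE-style surrogate loss; here `order_score` is a toy stand-in for the learned neural ordering reward, and all names are hypothetical.

```python
import math

def order_score(sentences):
    """Toy stand-in for a learned neural ordering reward: the fraction
    of adjacent sentence pairs that appear in the expected order."""
    pairs = list(zip(sentences, sentences[1:]))
    if not pairs:
        return 0.0
    return sum(a < b for a, b in pairs) / len(pairs)

def reinforce_loss(log_probs, reward, baseline=0.5):
    """REINFORCE surrogate loss: -(reward - baseline) * sum(log p).
    Minimizing it raises the probability of sampled rollouts whose
    reward exceeds the baseline, and lowers it otherwise."""
    return -(reward - baseline) * sum(log_probs)

# Toy rollout: sentence ids sampled from a generator, with log-probs.
sentences = ["s1", "s3", "s2", "s4"]
log_probs = [math.log(0.4), math.log(0.3), math.log(0.2), math.log(0.5)]
loss = reinforce_loss(log_probs, order_score(sentences))
```

In the paper's setting the reward would instead come from a trained neural model of discourse order, and the loss would be backpropagated through the generator's token log-probabilities.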