
Toward Diverse Text Generation with Inverse Reinforcement Learning - 2018

Research Area:  Machine Learning

Abstract:

Text generation is a crucial task in NLP. Recently, several adversarial generative models have been proposed to mitigate the exposure bias problem in text generation. Although these models achieve great success, they still suffer from reward sparsity and mode collapse. To address these two problems, in this paper we employ inverse reinforcement learning (IRL) for text generation. Specifically, the IRL framework learns a reward function on training data and then an optimal policy that maximizes the expected total reward. As in adversarial models, the reward and policy functions in IRL are optimized alternately. Our method has two advantages: (1) the reward function produces denser reward signals; (2) the generation policy, trained with an "entropy-regularized" policy gradient, is encouraged to generate more diverse texts. Experimental results demonstrate that our proposed method generates higher-quality texts than previous methods.
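The alternating optimization the abstract describes can be sketched in miniature. The toy below is a hypothetical simplification (not the paper's implementation): "texts" are single tokens from a tiny vocabulary, the learned reward is a per-token score, and the entropy-regularized optimal policy is the softmax of the reward divided by a temperature. The reward update raises scores on training data and lowers them on policy samples, so the two functions are optimized alternately as in the paper.

```python
import math

# Toy vocabulary and training corpus (hypothetical illustration only).
VOCAB = ["good", "great", "fine", "bad"]
DATA = ["good", "great", "good", "fine"]

def softmax(scores, tau=1.0):
    """Entropy-regularized optimal policy: pi(w) proportional to exp(r(w)/tau)."""
    m = max(scores.values())  # subtract max for numerical stability
    exps = {w: math.exp((s - m) / tau) for w, s in scores.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

def train(steps=200, lr=0.5, tau=1.0):
    r = {w: 0.0 for w in VOCAB}  # learned reward function
    for _ in range(steps):
        pi = softmax(r, tau)     # policy induced by the current reward
        # Alternating update: push reward up on the data distribution
        # and down where the current policy puts its mass.
        for w in VOCAB:
            p_data = DATA.count(w) / len(DATA)
            r[w] += lr * (p_data - pi[w])
    return r, softmax(r, tau)

reward, policy = train()
```

Because of the entropy term, the resulting policy keeps probability mass on every token seen in the data rather than collapsing onto the single highest-reward mode, which is the diversity effect the abstract claims.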

Keywords:  
Text Generation
Inverse Reinforcement Learning
Machine Learning
Deep Learning

Author(s) Name:  Zhan Shi, Xinchi Chen, Xipeng Qiu, Xuanjing Huang

Journal name:  Computer Science

Conference name:  

Publisher name:  arXiv:1804.11258

DOI:  10.48550/arXiv.1804.11258

Volume Information: