Research Area:  Machine Learning
Neural networks have shown promising results for generating text for creative uses. However, current methods require large datasets to generate coherent sentences, which severely limits their creative potential, since most stylistic literary datasets are relatively small. We build on recent advances in transfer learning for natural language processing and demonstrate that generic pre-trained language models can be effectively fine-tuned on small stylistic corpora to generate coherent and creatively expressive text. We empirically show the effectiveness of this method across three distinct literary styles where only a small number of tokens (fewer than 6k) is available. We suggest further work for understanding and improving this process, and release our code online.
Author(s) Name:  Katy Ilonka Gero, Giannis Karamanolakis, Lydia B. Chilton
Journal name:  Computer Science
Conference name:  
Publisher name:  Semantic Scholar
Corpus ID:  201760711
Volume Information:
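
As a concrete illustration of the transfer-learning recipe the abstract describes (fine-tuning a generic pre-trained language model on a small stylistic corpus), the sketch below uses a GPT-2 model via the Hugging Face transformers library. This is not the authors' released code: the model choice, corpus file path, and hyperparameters are all assumptions made for illustration.

# Minimal sketch: fine-tune a generic pre-trained LM (GPT-2, assumed here)
# on a small stylistic corpus, then sample style-adapted text.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Small stylistic corpus (on the order of <6k tokens), one plain-text file.
# "style_corpus.txt" is a hypothetical path, not from the paper.
train_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="style_corpus.txt",
    block_size=128,
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-style-lm",
        num_train_epochs=3,            # illustrative hyperparameters
        per_device_train_batch_size=4,
    ),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()

# Generate text from the fine-tuned model.
prompt = tokenizer("The moon", return_tensors="pt")
out = model.generate(**prompt, max_length=50, do_sample=True, top_k=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))

With a corpus this small, the main design choice is how long to fine-tune: too few steps and the output stays generic, too many and the model overfits and parrots the corpus, which is the kind of trade-off the paper's follow-up questions concern.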