Research Area:  Machine Learning
State-of-the-art keyphrase generation methods generally depend on large annotated datasets, which limits their performance in domains where annotated data is scarce. To overcome this challenge, we design a data-oriented approach that first identifies salient information using unsupervised corpus-level statistics, and then learns a task-specific intermediate representation based on a pre-trained language model. We introduce salient span recovery and salient span prediction as denoising training objectives that condense the intra-article and inter-article knowledge essential for keyphrase generation. Through experiments on multiple keyphrase generation benchmarks, we show the effectiveness of the proposed approach for facilitating low-resource and zero-shot keyphrase generation. We further observe that the method especially benefits the generation of absent keyphrases, approaching the performance of models trained with large training sets.
Keywords:  
Representation Learning
Resource-Constrained
Keyphrase Generation
Deep Learning
Machine Learning
Author(s) Name:  Di Wu, Wasi Uddin Ahmad, Sunipa Dev, Kai-Wei Chang
Journal name:  Computer Science
Conference name:  
Publisher name:  arXiv
DOI:  10.48550/arXiv.2203.08118
Volume Information:  
Paper Link:   https://arxiv.org/abs/2203.08118
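
The abstract above describes the denoising objectives only at a high level. As a rough illustration, the sketch below shows one way such training pairs could be constructed: salience is approximated with TF-IDF over the unlabeled corpus as the unsupervised corpus-level statistic, and a BART-style `<mask>` sentinel marks corrupted spans. All function names and scoring choices here are assumptions for illustration, not the authors' implementation; see the paper link above for the actual method.

```python
# Illustrative sketch only (assumptions, not the authors' implementation):
# salience is approximated with TF-IDF over the unlabeled corpus, and the
# "<mask>" sentinel stands in for the pre-trained model's mask token.
import math
import re
from collections import Counter

MASK = "<mask>"

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def idf_table(corpus):
    """Inverse document frequency computed from the unlabeled corpus."""
    df = Counter()
    for doc in corpus:
        df.update(set(tokenize(doc)))
    n = len(corpus)
    return {w: math.log(n / (1 + c)) for w, c in df.items()}

def salient_spans(doc, idf, top_k=3, max_len=3):
    """Rank contiguous word n-grams by mean TF-IDF as a salience proxy."""
    words = tokenize(doc)
    tf = Counter(words)
    scored = []
    for i in range(len(words)):
        for n in range(1, max_len + 1):
            span = words[i:i + n]
            if len(span) < n:
                break
            score = sum(tf[w] * idf.get(w, 0.0) for w in span) / n
            scored.append((score, " ".join(span)))
    chosen, used = [], set()
    for score, span in sorted(scored, reverse=True):
        span_words = set(span.split())
        if span_words & used:  # keep the selected spans word-disjoint
            continue
        chosen.append(span)
        used |= span_words
        if len(chosen) == top_k:
            break
    return chosen

def make_denoising_example(doc, spans):
    """Build inputs/targets for the two objectives named in the abstract:
    - salient span recovery: corrupted document -> original document
    - salient span prediction: corrupted document -> the masked spans
    """
    corrupted = doc
    for s in spans:
        pattern = r"\W+".join(map(re.escape, s.split()))
        corrupted = re.sub(pattern, MASK, corrupted, count=1, flags=re.I)
    return {"input": corrupted,
            "recovery_target": doc,
            "prediction_target": " ; ".join(spans)}

if __name__ == "__main__":
    corpus = [
        "Keyphrase generation produces phrases that summarize a document.",
        "Pre-trained language models improve low-resource keyphrase generation.",
        "Denoising objectives recover masked salient spans from scientific text.",
    ]
    idf = idf_table(corpus)
    doc = corpus[1]
    print(make_denoising_example(doc, salient_spans(doc, idf)))
```

In this sketch, the resulting (input, target) pairs would be fed to a pre-trained sequence-to-sequence model as intermediate training before fine-tuning on the small labeled keyphrase dataset; the span scoring, masking format, and target formatting are all placeholders for the corpus-level statistics and objectives the paper defines.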