Research Area:  Machine Learning
Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 18-24% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100x more data. We open-source our pretrained models and code.
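As the abstract summarizes, ULMFiT proceeds in three stages: general-domain language-model pretraining, target-task language-model fine-tuning, and target-task classifier fine-tuning using gradual unfreezing, discriminative learning rates, and slanted triangular learning rates. Below is a minimal sketch of the latter two stages using the authors' fastai library, which ships an AWD-LSTM implementation of ULMFiT; the sample dataset, learning rates, and epoch counts here are illustrative choices, not values taken from the paper.

    # ULMFiT-style fine-tuning sketch (fastai v2 API assumed).
    # Dataset and hyperparameters are illustrative only.
    import pandas as pd
    from fastai.text.all import *

    path = untar_data(URLs.IMDB_SAMPLE)     # small labeled movie-review sample
    df = pd.read_csv(path/'texts.csv')      # columns: 'label', 'text', 'is_valid'

    # Stage 2: fine-tune the pretrained (WikiText-103) language model
    # on text from the target domain.
    dls_lm = TextDataLoaders.from_df(df, text_col='text', is_lm=True, valid_pct=0.1)
    lm_learn = language_model_learner(dls_lm, AWD_LSTM, metrics=Perplexity())
    lm_learn.fit_one_cycle(1, 2e-2)         # one-cycle schedule, fastai's
                                            # stand-in for slanted triangular LRs
    lm_learn.save_encoder('ft_encoder')     # keep the fine-tuned encoder

    # Stage 3: fine-tune the classifier with gradual unfreezing and
    # discriminative learning rates (slice() gives lower LRs to earlier layers).
    dls_clas = TextDataLoaders.from_df(df, text_col='text', label_col='label',
                                       text_vocab=dls_lm.vocab, valid_pct=0.1)
    clas_learn = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
    clas_learn.load_encoder('ft_encoder')
    clas_learn.fit_one_cycle(1, 2e-2)       # train only the new classifier head
    clas_learn.freeze_to(-2)                # unfreeze one more layer group
    clas_learn.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))
    clas_learn.unfreeze()                   # finally fine-tune all layers
    clas_learn.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3))

The 2.6 factor used to spread learning rates across layer groups follows the discriminative fine-tuning heuristic reported in the paper.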
Keywords:  Universal Language Model, Fine-Tuning, Text Classification
Author(s) Name:  Jeremy Howard, Sebastian Ruder
Journal name:  arXiv preprint arXiv:1801.06146
Conference name:  ACL 2018 (56th Annual Meeting of the Association for Computational Linguistics)
Publisher name:  arXiv
DOI:  10.48550/arXiv.1801.06146
Volume Information:  
Paper Link:  https://arxiv.org/abs/1801.06146