Research Area:  Machine Learning
Deep neural network models have recently achieved state-of-the-art performance on a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017). However, these gains rely on the availability of large amounts of annotated examples, without which state-of-the-art performance is rarely achievable. This is especially problematic for the many NLP domains where annotated examples are scarce, such as medical text. To improve NLP models in this setting, we evaluate five techniques for improving named entity recognition (NER) when only ten annotated examples are available: (1) layer-wise initialization with pre-trained weights, (2) hyperparameter tuning, (3) combining pre-training data, (4) custom word embeddings, and (5) optimizing out-of-vocabulary (OOV) words. Experimental results show that the F1 score of 69.3% achieved by state-of-the-art models can be improved to 78.87%.
Keywords:  
Few-Shot Learning
Named Entity Recognition
Medical Text
Machine Learning
Deep Learning
Natural Language Processing (NLP)
Author(s) Name:  Maximilian Hofer, Andrey Kormilitzin, Paul Goldberg, Alejo Nevado-Holgado
Journal name:  Computer Science
Conference name:  
Publisher name:  arXiv
DOI:  10.48550/arXiv.1811.05468
Volume Information:  
Paper Link:  https://arxiv.org/abs/1811.05468