DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter - 2019

Research Area:  Machine Learning

Abstract:

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
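The triple loss described in the abstract combines a soft-target distillation term (the student matches the teacher's temperature-softened output distribution), the standard masked language modeling objective, and a cosine-distance term that aligns the student's and teacher's hidden states. The PyTorch sketch below illustrates that combination; the function name, tensor shapes, loss weights and temperature are assumptions for the example rather than values taken from the paper.

```python
import torch
import torch.nn.functional as F

def triple_distillation_loss(student_logits, teacher_logits, labels,
                             student_hidden, teacher_hidden,
                             temperature=2.0, alpha_ce=1.0, alpha_mlm=1.0, alpha_cos=1.0):
    """Illustrative triple loss: distillation + masked LM + cosine alignment.

    Assumed shapes: logits are (batch, seq_len, vocab_size), hidden states are
    (batch, seq_len, hidden_dim); `labels` holds masked-token ids with -100 at
    positions that should be ignored. Weights and temperature are placeholders.
    """
    t = temperature

    # 1) Distillation loss: KL divergence between temperature-softened
    #    student and teacher distributions (scaled by t^2, as is standard).
    ce_loss = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t ** 2)

    # 2) Masked language modeling loss against the ground-truth masked tokens.
    mlm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # 3) Cosine embedding loss pushing student hidden states toward the
    #    teacher's (target of +1 means "maximize cosine similarity").
    s = student_hidden.view(-1, student_hidden.size(-1))
    te = teacher_hidden.view(-1, teacher_hidden.size(-1))
    target = torch.ones(s.size(0), device=s.device)
    cos_loss = F.cosine_embedding_loss(s, te, target)

    return alpha_ce * ce_loss + alpha_mlm * mlm_loss + alpha_cos * cos_loss
```

In a training loop of this kind, the teacher's forward pass would run under torch.no_grad() so that only the smaller student model is updated, which is what makes the distilled model cheaper to pre-train and faster at inference.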

Keywords:  

Author(s) Name:  Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf

Journal name:  Computer Science

Conference name:  

Publisher name:  arXiv (preprint arXiv:1910.01108)

DOI:  10.48550/arXiv.1910.01108

Volume Information: