
How to Fine-Tune BERT for Text Classification - 2019

Research Area:  Machine Learning

Abstract:

Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art language model pre-training model, BERT (Bidirectional Encoder Representations from Transformers) has achieved amazing results in many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on the text classification task and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.
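One of the fine-tuning methods the paper investigates is layer-wise (discriminative) learning-rate decay: the top encoder layer is trained at the base learning rate, and each lower layer's rate is scaled down by a constant decay factor. A minimal sketch, assuming a 12-layer BERT-base encoder, a base rate of 2e-5, and a decay factor of 0.95 (values in line with the paper's experiments; the function name `layerwise_lrs` is ours, not from the paper):

```python
def layerwise_lrs(base_lr=2e-5, decay=0.95, num_layers=12):
    """Per-layer learning rates, index 0 = lowest encoder layer.

    The top layer (index num_layers - 1) gets base_lr; each layer below
    it is scaled by one more factor of `decay`.
    """
    return [base_lr * decay ** (num_layers - 1 - k) for k in range(num_layers)]

rates = layerwise_lrs()
# rates[-1] == 2e-5 (top layer), rates[0] is the smallest (lowest layer)
```

In practice these per-layer rates would be attached to separate optimizer parameter groups (e.g. one group per encoder layer in a PyTorch optimizer), so that lower, more general layers are updated more conservatively than task-specific upper layers.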

Author(s) Name:  Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang


Conference name:  China National Conference on Chinese Computational Linguistics

Publisher name:  Springer

DOI:  10.1007/978-3-030-32381-3_16
