Research Topics in Pre-training of Deep Bidirectional Transformers for Language Understanding

   Bidirectional Encoder Representations from Transformers (BERT) is a pre-trained deep bidirectional representation model that captures the contextual relationships between words in unlabeled text. BERT builds on the Transformer architecture: an encoder reads the input text, and task-specific output layers produce the predictions for each task. The key idea behind BERT is to pre-train representations from unlabeled text by jointly conditioning on both left and right context in all layers. Traditional fine-tuning approaches rely on unidirectional language models to learn general language representations, which limits the architectures that can be used during pre-training. A quick way to see bidirectional prediction in action is the masked-token sketch below.
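   The following minimal sketch of masked-token prediction assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint; the example sentence is purely illustrative. It shows how BERT fills a masked position using both the left and right context.

```python
# Minimal masked language modelling sketch, assuming the Hugging Face
# "transformers" library and the public "bert-base-uncased" checkpoint.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Mask one token; BERT predicts it from both the left and right context.
text = "The doctor prescribed a [MASK] for the infection."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring vocabulary entry.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```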
    BERT tackles these issues by performing bidirectional pre-training and achieves state-of-the-art performance after fine-tuning. Pre-training a deep bidirectional Transformer is made possible by the masked language model (MLM) objective, which allows the representation to fuse the left and right context. BERT involves two steps: pre-training and fine-tuning. In pre-training, the model is trained on unlabeled data over different pre-training tasks. In fine-tuning, the model is initialized with the pre-trained parameters, and all parameters are then fine-tuned using labeled data from the downstream task. Each downstream task has its own fine-tuned model, even though all of them start from the same pre-trained parameters, as in the sketch after this paragraph. BERT is applied to a wide variety of NLP tasks, such as text or sentence classification, semantic similarity between sentence pairs, question answering over a paragraph, text summarization, and many more.
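   The sketch below illustrates the fine-tuning step for sentence classification, again assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; the two-sentence dataset, its labels, and the number of update steps are hypothetical and only meant to show how the pre-trained parameters are reused.

```python
# Minimal fine-tuning sketch for sentence classification, assuming the
# Hugging Face "transformers" library; the tiny in-memory dataset is
# purely illustrative.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# All pre-trained encoder weights are reused; only the small
# classification head on top is initialized from scratch.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["the movie was wonderful", "a dull and tedious film"]
labels = torch.tensor([1, 0])  # hypothetical labels: 1 = positive, 0 = negative

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps; real fine-tuning runs full epochs
    outputs = model(**batch, labels=labels)  # returns the classification loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

   A separate copy of the model would be fine-tuned in the same way for each downstream task, such as semantic similarity or question answering, each with its own task-specific head and labeled data.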