Research Topics in Deep Contextual Word Embedding Models for Semantic Similarity

   In semantic similarity tasks, representing words as embeddings makes it possible to discover semantically related words and improves results. Word embeddings are dense vector representations of words and are widely used in natural language processing (NLP) tasks. They are typically learned with neural networks, which produce numerical representations that capture the semantic relationships between words. Deep contextual word embedding models go further: using deep neural networks, they capture word semantics in context, so that sentences containing the same word with different meanings receive different representations. Such models learn sequence-level semantics by considering the order of all words in a document, as illustrated in the sketch below.
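   To make this concrete, the following minimal sketch contrasts the contextual embeddings of a single word appearing in different sentences. It assumes the Hugging Face transformers library and PyTorch are installed, and uses the public bert-base-uncased checkpoint purely as an illustrative model:

```python
# Minimal sketch: the same word receives different vectors in different contexts.
# Assumes Hugging Face transformers + PyTorch; bert-base-uncased is illustrative.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

# "bank" is encoded differently depending on its sentence-level context.
river = word_vector("he sat on the bank of the river", "bank")
money = word_vector("she deposited cash at the bank", "bank")
shore = word_vector("fish swam near the bank of the stream", "bank")

cos = torch.nn.functional.cosine_similarity
print("river vs money:", cos(river, money, dim=0).item())  # lower similarity
print("river vs shore:", cos(river, shore, dim=0).item())  # higher similarity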
   Word embedding models for semantic similarity can also measure the similarity between texts in different languages by mapping the embeddings of one language onto another. Embedding models commonly used for semantic similarity include word2vec, Global Vectors for Word Representation (GloVe), fastText, Bidirectional Encoder Representations from Transformers (BERT), Continuous Bag of Words (CBOW), Embeddings from Language Models (ELMo), and Skip-gram. The quality of the word embeddings plays a significant role in the performance of semantic similarity methods, as the sketch below suggests.
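   As a complement to the contextual example above, the sketch below scores sentence similarity with static embeddings by averaging word vectors and comparing them with cosine similarity. It assumes the gensim library is installed; "glove-wiki-gigaword-50" is one pretrained GloVe model available through gensim's downloader, chosen here only for illustration:

```python
# Minimal sketch: sentence similarity from averaged static word embeddings.
# Assumes gensim; "glove-wiki-gigaword-50" is an illustrative pretrained model.
import numpy as np
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-50")  # maps word -> 50-dim vector

def sentence_vector(sentence: str) -> np.ndarray:
    """Average the embeddings of all in-vocabulary words in the sentence."""
    vectors = [glove[w] for w in sentence.lower().split() if w in glove]
    return np.mean(vectors, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = sentence_vector("a man is playing a guitar")
b = sentence_vector("someone performs music on a guitar")
c = sentence_vector("the stock market fell sharply today")

print("related  :", cosine(a, b))  # expected to be relatively high
print("unrelated:", cosine(a, c))  # expected to be lower
```

   Averaging word vectors ignores word order, which is precisely the limitation that deep contextual models such as BERT and ELMo address by encoding the full word sequence.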