
LMMS reloaded: Transformer-based sense embeddings for disambiguation and beyond - 2022

Research Area:  Machine Learning

Abstract:

Distributional semantics based on neural approaches is a cornerstone of Natural Language Processing, with surprising connections to human meaning representation as well. Recent Transformer-based Language Models have proven capable of producing contextual word representations that reliably convey sense-specific information, simply as a product of self-supervision. Prior work has shown that these contextual representations can be used to accurately represent large sense inventories as sense embeddings, to the extent that a distance-based solution to Word Sense Disambiguation (WSD) tasks outperforms models trained specifically for the task. Still, there remains much to understand about how to use these Neural Language Models (NLMs) to produce sense embeddings that can better harness each NLM's meaning representation abilities. In this work we introduce a more principled approach to leverage information from all layers of NLMs, informed by a probing analysis on 14 NLM variants. We also emphasize the versatility of these sense embeddings in contrast to task-specific models, applying them on several sense-related tasks, besides WSD, while demonstrating improved performance using our proposed approach over prior work focused on sense embeddings. Finally, we discuss unexpected findings regarding layer and model performance variations, and potential applications for downstream tasks.
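The distance-based WSD setup the abstract refers to can be illustrated with a minimal sketch: average the contextual vectors observed for each sense key to form sense embeddings, then disambiguate a new contextual vector by nearest-neighbor (cosine) search. This is a simplified illustration, not the paper's actual pipeline; the sense keys, vectors, and function names below are hypothetical, and in practice the contextual vectors would come from the hidden layers of a Transformer NLM.

```python
import numpy as np

def build_sense_embeddings(annotated_contexts):
    """Average the contextual vectors tagged with the same sense key.

    `annotated_contexts` maps a sense key (hypothetical format) to a list
    of contextual vectors collected from sense-annotated corpora.
    """
    return {sense: np.mean(vecs, axis=0)
            for sense, vecs in annotated_contexts.items()}

def disambiguate(context_vec, sense_embeddings):
    """Return the sense whose embedding is closest (by cosine similarity)
    to the contextual vector of the target word occurrence."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(sense_embeddings, key=lambda s: cos(context_vec, sense_embeddings[s]))

# Toy usage with two hypothetical senses of "bank":
senses = build_sense_embeddings({
    "bank%finance": [np.array([1.0, 0.0]), np.array([0.9, 0.1])],
    "bank%river":   [np.array([0.0, 1.0]), np.array([0.1, 0.9])],
})
print(disambiguate(np.array([0.8, 0.2]), senses))  # -> bank%finance
```

The paper's contribution concerns how vectors from all of an NLM's layers are combined before this nearest-neighbor step; the plain per-sense averaging above stands in for that layer-pooling stage.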

Keywords:  
LMMS reloaded
Transformer
Word Sense Disambiguation (WSD)
Neural Language Models
Deep Learning
Machine Learning

Author(s) Name:  Daniel Loureiro, Alípio Mário Jorge, Jose Camacho-Collados

Journal name:  Artificial Intelligence

Conference name:  

Publisher name:  Elsevier

DOI:  10.1016/j.artint.2022.103661

Volume Information:   Volume 305, April 2022, 103661