A Systematic Investigation of Commonsense Knowledge in Large Language Models - 2022

Research Area:  Machine Learning

Abstract:

Language models (LMs) trained on large amounts of data have shown impressive performance on many NLP tasks under the zero-shot and few-shot setup. Here we aim to better understand the extent to which such models learn commonsense knowledge — a critical component of many NLP applications. We conduct a systematic and rigorous zero-shot and few-shot commonsense evaluation of large pre-trained LMs, where we: (i) carefully control for the LMs' ability to exploit potential surface cues and annotation artefacts, and (ii) account for variations in performance that arise from factors that are not related to commonsense knowledge. Our findings highlight the limitations of pre-trained LMs in acquiring commonsense knowledge without task-specific supervision; furthermore, using larger models or few-shot evaluation is insufficient to achieve human-level commonsense performance.
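The zero-shot protocol the abstract refers to can be made concrete with a short sketch: score each candidate answer of a multiple-choice commonsense question by the log-probability a pre-trained LM assigns to it, with no task-specific training, and predict the highest-scoring choice. This is a minimal illustration, not the authors' evaluation code; the model choice (gpt2), the example question, and the plain sum-of-log-probabilities scoring are assumptions made here for brevity, whereas the paper additionally controls for surface cues and annotation artefacts.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative model choice; the paper evaluates much larger pre-trained LMs.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def sequence_log_prob(text: str) -> float:
        """Total log-probability the LM assigns to `text`."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels=ids, the HF causal LM returns the mean cross-entropy
            # over the ids.shape[1] - 1 predicted tokens; undo the mean to get a sum.
            loss = model(input_ids=ids, labels=ids).loss
        return -loss.item() * (ids.shape[1] - 1)

    # Hypothetical question in the multiple-choice format commonsense
    # benchmarks typically use.
    question = "A man drops a glass bottle onto a concrete floor. What happens next?"
    choices = ["The bottle shatters.", "The bottle floats up to the ceiling."]

    # Zero-shot: no gradient updates and no task-specific supervision; the
    # prediction is simply the answer the LM finds most likely in context.
    scores = [sequence_log_prob(question + " " + c) for c in choices]
    print(choices[scores.index(max(scores))])

One common control of the kind the abstract mentions is to also score each answer without the question (an answer-only baseline), so that a model preferring one answer string regardless of context receives no credit for it.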

Keywords:  Language models, NLP tasks, NLP applications, Pre-trained LMs, Commonsense knowledge

Author(s) Name:  Xiang Lorraine Li, Adhiguna Kuncoro, Jordan Hoffmann, Cyprien de Masson d'Autume, Phil Blunsom, Aida Nematzadeh

Conference name:  Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022)

Publisher name:  Association for Computational Linguistics (ACL Anthology)

DOI:  10.18653/v1/2022.emnlp-main.812

Volume Information: