Research Area:  Machine Learning
Temporal common sense (e.g., the duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation of such concepts is costly. This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TACOLM, a temporal common sense language model. The method is shown to give quality predictions along various dimensions of temporal common sense (on UDST and a newly collected dataset from RealNews). It also produces event representations for relevant tasks such as duration comparison, parent-child relations, event coreference, and temporal QA (on TimeBank, HiEVE, and MCTACO) that are better than those from the standard BERT. Thus, it will be an important component of temporal NLP.
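For intuition, the abstract's key signal, explicit temporal mentions serving as cheap supervision for a masked language model, can be probed with off-the-shelf tools. The sketch below is not the authors' TACOLM code; the model name (bert-base-uncased), the example sentence, and the use of Hugging Face's fill-mask pipeline are all assumptions made for illustration.

```python
# Minimal sketch (NOT the authors' TACOLM implementation): probe a standard
# BERT with a masked temporal unit, the kind of explicit duration mention
# the paper mines from large corpora as distant supervision.
# Assumes `pip install transformers torch`.
from transformers import pipeline

# Model choice is an assumption; TACOLM adapts a BERT-style encoder.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# An explicit duration mention with its temporal unit masked out. A model
# with temporal common sense should rank plausible units ("hours",
# "minutes") above implausible ones ("decades").
for pred in fill_mask("The staff meeting lasted about two [MASK]."):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```

Comparing such rankings before and after training on mined temporal mentions is one way to see the gap between standard BERT and a temporally informed model that the paper addresses.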
Keywords:  
Temporal common sense
Duration
Frequency of events
BERT
Computation and Language
Author(s) Name:  Ben Zhou, Qiang Ning, Daniel Khashabi, Dan Roth
Journal name:  
Conference name:  Annual Meeting of the Association for Computational Linguistics (ACL 2020)
Publisher name:  arXiv
DOI:  10.48550/arXiv.2005.04304
Volume Information:  
Paper Link:   https://arxiv.org/abs/2005.04304