Research Area:  Machine Learning
We investigate the use of multimodal information contained in images as an effective method for enhancing the commonsense of Transformer models for text generation. We perform experiments using BART and T5 on concept-to-text generation, specifically the task of generative commonsense reasoning, or CommonGen. We call our approach VisCTG: Visually Grounded Concept-to-Text Generation. VisCTG involves captioning images representing appropriate everyday scenarios, and using these captions to enrich and steer the generation process. Comprehensive evaluation and analysis demonstrate that VisCTG noticeably improves model performance while successfully addressing several issues of the baseline generations, including poor commonsense, fluency, and specificity.
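The grounding step described above can be sketched in a few lines: a caption of an image retrieved for the input concept set is concatenated with the concepts to form the encoder input of a seq2seq model. The snippet below is a minimal illustration using Hugging Face BART; the checkpoint name, input template, and hard-coded caption are assumptions for demonstration rather than the paper's exact pipeline, and in practice the model would be fine-tuned on CommonGen.

```python
# Minimal sketch of the VisCTG idea (illustrative, not the authors' code):
# a caption of a retrieved image is appended to the concept set so the
# seq2seq model's generation is visually grounded.
from transformers import BartForConditionalGeneration, BartTokenizer

# Assumed checkpoint; the paper fine-tunes on CommonGen before generating.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

concepts = ["dog", "frisbee", "catch", "throw"]
# Caption of an image retrieved for these concepts (e.g., from an
# off-the-shelf image captioner); hard-coded here for illustration.
caption = "a man throws a frisbee and his dog catches it"

# Hypothetical input template: concepts, separator, grounding caption.
source = " ".join(concepts) + " </s> " + caption

inputs = tokenizer(source, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```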
Keywords:  
Speech & Natural Language Processing
Machine Learning
Multimodal information
Concept-to-Text Generation
VisCTG
Author(s) Name:  Steven Y. Feng, Kevin Lu, Zhuofu Tao
Journal name:  
Conference name:  Proceedings of the AAAI Conference on Artificial Intelligence
Publisher name:  Association for the Advancement of Artificial Intelligence (AAAI)
DOI:  10.1609/aaai.v36i10.21306
Volume Information:  Volume 36, Issue 10
Paper Link:   https://ojs.aaai.org/index.php/AAAI/article/view/21306