
ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks - 2019

Research Area:  Machine Learning

Abstract:

We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models -- achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
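
The co-attentional interaction the abstract describes can be sketched in a few lines: each stream's queries attend over the other stream's keys and values, so image-region features are conditioned on the text and vice versa. The following is a minimal, hypothetical PyTorch illustration, not the authors' released implementation; the hidden size (768) and head count (12, mirroring BERT-base) are illustrative assumptions, and the paper's actual co-attentional blocks also contain self-attention and feed-forward sub-layers omitted here.

```python
# Minimal sketch of a co-attentional transformer layer (assumed configuration).
import torch
import torch.nn as nn

class CoAttentionLayer(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        # Visual queries attend over linguistic keys/values, and vice versa.
        self.vis_attends_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_attends_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, visual: torch.Tensor, text: torch.Tensor):
        # visual: (batch, num_regions, dim); text: (batch, num_tokens, dim)
        v_out, _ = self.vis_attends_txt(query=visual, key=text, value=text)
        t_out, _ = self.txt_attends_vis(query=text, key=visual, value=visual)
        # Residual connections keep the two streams separate, as in the
        # two-stream design the abstract describes.
        return self.norm_v(visual + v_out), self.norm_t(text + t_out)

# Usage: exchange information between image-region and token features.
vis = torch.randn(2, 36, 768)   # e.g., 36 detected region features per image
txt = torch.randn(2, 20, 768)   # e.g., 20 wordpiece token embeddings
vis2, txt2 = CoAttentionLayer()(vis, txt)
print(vis2.shape, txt2.shape)   # torch.Size([2, 36, 768]) torch.Size([2, 20, 768])
```

Keeping two streams with cross-attention, rather than concatenating both modalities into one sequence, lets each modality retain its own processing depth while still exchanging information at chosen layers.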

Keywords:  

Author(s) Name:  Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee

Journal name:  Computer Science

Conference name:  

Publisher name:  arXiv:1908.02265

DOI:  10.48550/arXiv.1908.02265

Volume Information: