

CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings - 2022


Research Area:  Machine Learning

Abstract:

This paper presents CoLLIE: a simple, yet effective model for continual learning of how language is grounded in vision. Given a pre-trained multimodal embedding model, where language and images are projected in the same semantic space (in this case CLIP by OpenAI), CoLLIE learns a transformation function that adjusts the language embeddings when needed to accommodate new language use. This is done by predicting the difference vector that needs to be applied, as well as a scaling factor for this vector, so that the adjustment is only applied when needed. Unlike traditional few-shot learning, the model does not just learn new classes and labels, but can also generalize to similar language use and leverage semantic compositionality. We verify the model's performance on two different tasks of identifying the targets of referring expressions, where it has to learn new language use. The results show that the model can efficiently learn and generalize from only a few examples, with little interference with the model's original zero-shot performance.
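The adjustment described above (a predicted difference vector gated by a predicted scaling factor) can be sketched as follows. This is a minimal, hypothetical illustration with assumed shapes and randomly initialized parameters, not the authors' implementation; the paper applies the idea to CLIP language embeddings, and the learned components here are simplified to single linear maps.

```python
import numpy as np

# Hypothetical setup: d is the embedding dimension (CLIP uses d = 512).
rng = np.random.default_rng(0)
d = 512

# Assumed learned parameters of the transformation function
# (in CoLLIE these would be trained from a few examples).
W_delta = rng.standard_normal((d, d)) * 0.01   # predicts the difference vector
w_scale = rng.standard_normal(d) * 0.01        # predicts the scaling factor

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def collie_transform(e):
    """Adjust a language embedding e as e' = e + s(e) * delta(e).

    delta(e) is the predicted difference vector; s(e) in (0, 1) is the
    predicted scaling factor, so embeddings that need no adjustment are
    left (nearly) unchanged."""
    delta = W_delta @ e           # difference vector to apply
    s = sigmoid(w_scale @ e)      # scaling factor gating the adjustment
    return e + s * delta

e = rng.standard_normal(d)        # stand-in for a CLIP text embedding
e_adj = collie_transform(e)
print(e_adj.shape)
```

Because the adjustment is additive and gated, the transformed model can fall back to the original (zero-shot) embedding when the scaling factor is near zero, which is how the paper limits interference with CLIP's original performance.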

Keywords:  
Continual Learning
Language Grounding
Language-Image Embeddings
Pre-trained Multimodal Embedding Model

Author(s) Name:  Gabriel Skantze, Bram Willemsen

Journal name:  Computation and Language

Conference name:  

Publisher name:  arXiv:2111.07993

DOI:  10.48550/arXiv.2111.07993

Volume Information: