Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models - 2020

Research Area:  Machine Learning

Abstract:

Multimodal learning for generative models often refers to the learning of abstract concepts from the commonality of information in multiple modalities, such as vision and language. While it has proven effective for learning generalisable representations, the training of such models often requires a large amount of "related" multimodal data that shares commonality, which can be expensive to come by. To mitigate this, we develop a novel contrastive framework for generative model learning, allowing us to train the model not just by the commonality between modalities, but by the distinction between "related" and "unrelated" multimodal data. We show in experiments that our method enables data-efficient multimodal learning on challenging datasets for various multimodal VAE models. We also show that under our proposed framework, the generative model can accurately distinguish related samples from unrelated ones, making it possible to exploit plentiful unlabeled, unpaired multimodal data.
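
The contrastive idea described in the abstract can be illustrated in code. Below is a minimal PyTorch sketch, not the authors' implementation: the model object and its elbo method are hypothetical stand-ins for a generic multimodal VAE, and the hinge-style objective is just one simple way to contrast related pairs against shuffled, unrelated ones (the paper's exact formulation may differ).

```python
# Sketch: train a multimodal VAE not only to maximise the ELBO of
# "related" (vision, language) pairs, but also to push down the ELBO
# of "unrelated" pairs, in a contrastive fashion. `model.elbo(a, b)`
# is a hypothetical method assumed to return a per-sample ELBO for
# the joint generative model of the two modalities (higher is better).
import torch

def contrastive_step(model, x_vision, x_language, margin=1.0):
    """One training step on a batch of related (vision, language) pairs."""
    # ELBO of the genuinely related pairs.
    elbo_related = model.elbo(x_vision, x_language)

    # Manufacture "unrelated" pairs by shuffling one modality within
    # the batch, so each image is matched with another sample's caption.
    perm = torch.randperm(x_language.size(0))
    elbo_unrelated = model.elbo(x_vision, x_language[perm])

    # Contrastive objective: maximise the related ELBO while requiring
    # it to exceed the unrelated ELBO by at least `margin` (hinge loss).
    loss = (-elbo_related
            + torch.relu(margin - (elbo_related - elbo_unrelated))).mean()
    return loss
```

Shuffling within the batch is a common way to manufacture negative (unrelated) pairs without collecting any extra data, which is what a data-efficient contrastive setup relies on.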

Keywords:  Multimodal, Generative Models, Multimodal learning, Machine Learning

Author(s) Name:  Yuge Shi, Brooks Paige, Philip H.S. Torr, N. Siddharth

Journal name:  Machine Learning

Conference name:  

Publisher name:  arXiv (arXiv:2007.01179)

DOI:  10.48550/arXiv.2007.01179

Volume Information: