Adversarial Multiview Clustering Networks With Adaptive Fusion - 2023


Research Area:  Machine Learning

Abstract:

The existing deep multiview clustering (MVC) methods are mainly based on autoencoder networks, which seek common latent variables to reconstruct the original input of each view individually. However, due to the view-specific reconstruction loss, it is challenging to extract consistent latent representations over multiple views for clustering. To address this challenge, we propose adversarial MVC (AMvC) networks in this article. The proposed AMvC generates each view's samples conditioned on the fused latent representations among different views to encourage a more consistent clustering structure. Specifically, multiview encoders are used to extract latent descriptions from all the views, and the corresponding generators are used to generate the reconstructed samples. The discriminative networks and the mean squared loss are jointly utilized for training the multiview encoders and generators to balance the distinctness and consistency of each view's latent representation. Moreover, an adaptive fusion layer is developed to obtain a shared latent representation, on which a clustering loss and the ℓ1,2-norm constraint are further imposed to improve clustering performance and distinguish the latent space. Experimental results on video, image, and text datasets demonstrate the effectiveness of our AMvC over several state-of-the-art deep MVC methods.
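The two components most specific to this abstract are the adaptive fusion layer, which combines per-view latent representations into one shared representation, and the mixed-norm constraint imposed on it. The sketch below is an illustrative NumPy rendering of those two ideas only, not the paper's implementation: the softmax weighting and the row-wise reading of the ℓ1,2-norm are assumptions (mixed-norm conventions vary between papers), and all names are hypothetical.

```python
import numpy as np

def adaptive_fusion(latents, weights):
    """Fuse per-view latent matrices Z_v (n_samples x d each) into one
    shared representation using learnable per-view weights.
    A softmax keeps the weights positive and summing to 1 (an assumed
    parameterization, not necessarily the paper's)."""
    w = np.exp(weights) / np.sum(np.exp(weights))
    return sum(w_v * z_v for w_v, z_v in zip(w, latents))

def l12_norm(Z):
    """One common reading of the l_{1,2} mixed norm: the sum of the
    l2 norms of the rows, which encourages row sparsity in the shared
    latent space."""
    return np.sum(np.sqrt(np.sum(Z ** 2, axis=1)))

# Two synthetic views: 5 samples, 3-dimensional latent codes each.
rng = np.random.default_rng(0)
views = [rng.standard_normal((5, 3)) for _ in range(2)]

fused = adaptive_fusion(views, np.zeros(2))  # zero weights -> equal fusion
penalty = l12_norm(fused)                    # added to the clustering loss
print(fused.shape)  # (5, 3)
```

In training, `penalty` would enter the objective alongside the clustering loss, while `weights` would be updated by gradient descent together with the encoders and generators; here both are frozen for illustration.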

Keywords:  
Feature extraction
Image reconstruction
Generators
Data models
Clustering algorithms
Training
Representation learning

Author(s) Name:  Qianqian Wang; Zhiqiang Tao; Wei Xia

Journal name:  IEEE Transactions on Neural Networks and Learning Systems

Publisher name:  IEEE

DOI:  10.1109/TNNLS.2022.3145048

Volume Information:  Volume 34