Research Area:  Machine Learning
Domain adaptation methods train a model to find similar feature representations between a source and a target domain. Recent methods leverage self-supervised learning to discover representations shared by the two domains. However, prior self-supervised methods have three significant drawbacks: (1) they leverage pretext tasks that are susceptible to learning low-level representations, (2) they align the two domains with an adversarial loss without considering whether the extracted features are low-level representations, and (3) they are not flexible enough to accommodate varying proportions of target labels, i.e., they assume target labels are always available. This paper presents a Generic Domain Adaptation Network (GDAN) to address these issues. First, we introduce a criterion based on instance discrimination to select appropriate pretext tasks for learning high-level domain-invariant representations. Then, we propose a semantic neighbor cluster to align the features of the two domains. The semantic neighbor cluster applies a clustering technique in a feature embedding space to form clusters according to high-level semantic similarities. Finally, we present a weighted target loss function that balances the model weights according to the available target labels. This loss function makes GDAN flexible for semi-supervised scenarios, i.e., partially labeled target data. We evaluate the proposed methods on four domain adaptation benchmark datasets. The experimental results show that the proposed methods align the two domains well and achieve competitive performance.
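The abstract does not include implementation details, so the PyTorch snippet below is a purely illustrative sketch (not the authors' code) of one plausible reading of two of the described components: grouping samples by semantic similarity in a feature embedding space, and a target loss weighted by the proportion of labeled target samples. The function names, the neighbor-based grouping, and the convex weighting scheme are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def semantic_neighbor_clusters(features, num_neighbors=5):
    """Illustrative sketch: group samples by cosine similarity in the
    feature embedding space (a stand-in for the paper's semantic
    neighbor cluster, whose exact algorithm the abstract omits)."""
    z = F.normalize(features, dim=1)        # unit-norm embeddings
    sim = z @ z.t()                         # pairwise cosine similarity
    # Each sample's neighborhood is its top-k most similar peers.
    _, neighbors = sim.topk(num_neighbors + 1, dim=1)
    return neighbors[:, 1:]                 # drop self-similarity

def weighted_target_loss(src_logits, src_labels,
                         tgt_logits, tgt_labels, tgt_labeled_mask):
    """Illustrative sketch: balance source and target supervision by the
    fraction of labeled target samples, so training degrades gracefully
    from fully labeled to unlabeled target data (the semi-supervised
    setting the abstract describes)."""
    ratio = tgt_labeled_mask.float().mean()  # fraction of labeled targets
    loss_src = F.cross_entropy(src_logits, src_labels)
    if tgt_labeled_mask.any():
        loss_tgt = F.cross_entropy(tgt_logits[tgt_labeled_mask],
                                   tgt_labels[tgt_labeled_mask])
    else:
        loss_tgt = src_logits.new_zeros(())  # no labeled targets available
    return (1.0 - ratio) * loss_src + ratio * loss_tgt
```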
Keywords:  
Domain adaptation
Self-supervised learning
Deep clustering
Image recognition
Pretext task
Author(s) Name:  Adu Asare Baffour, Zhen Qin, Ji Geng
Journal name:  Neurocomputing
Conference name:  
Publisher name:  Elsevier
DOI:  10.1016/j.neucom.2021.12.099
Volume Information:  Volume 476
Paper Link:  https://www.sciencedirect.com/science/article/pii/S092523122101955X