Research Area:  Machine Learning
Person re-identification (re-ID) remains challenging in real-world scenarios, as it requires a trained network to generalise to entirely unseen target data in the presence of variations across domains. Recently, generative adversarial models have been widely adopted to enhance the diversity of training data. These approaches, however, often fail to generalise to other domains, as existing generative person re-ID models suffer from a disconnect between the generative component and the discriminative feature-learning stage. To address the ongoing challenges regarding model generalisation, we propose an end-to-end domain adaptive attention network that jointly translates images between domains and learns discriminative re-ID features in a single framework. To address the domain-gap challenge, we introduce an attention module that translates images from the source to the target domain without affecting the identity of the person. More specifically, attention is directed to the background rather than the entire image of the person, ensuring the identifying characteristics of the subject are preserved. The proposed joint learning network yields a significant performance improvement over state-of-the-art methods on several challenging benchmark datasets.
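The abstract does not give implementation details, but the background-attention idea can be illustrated with a minimal sketch: a spatial attention mask near 1 over the person keeps identity regions from the source image, while background pixels are taken from the generator's translated output. All names, shapes, and the blending rule here are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

def attention_blend(source, translated, attention):
    """Blend a source image with its translated version using a spatial mask.

    source, translated: (H, W, C) float arrays in [0, 1]
    attention: (H, W) float array in [0, 1]; ~1 on the person (identity
               regions preserved), ~0 on background (translated style used)
    """
    a = attention[..., np.newaxis]              # broadcast mask over channels
    return a * source + (1.0 - a) * translated  # convex per-pixel blend

# Toy example: 2x2 image where the left column is the "person"
# and the right column is background.
source = np.full((2, 2, 3), 0.9)        # original appearance (identity)
translated = np.full((2, 2, 3), 0.1)    # target-domain background style
attention = np.array([[1.0, 0.0],
                      [1.0, 0.0]])      # 1 on person, 0 on background

out = attention_blend(source, translated, attention)
# Person pixels keep the source values; background pixels take the
# translated values, so identity is untouched by the domain translation.
```

In the paper's end-to-end setting such a mask would be predicted by a learned attention module and the translated image produced by an adversarial generator; this sketch only shows why restricting attention to the background preserves the subject's identifying characteristics.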
Author(s) Name:  Amena Khatun; Simon Denman; Sridha Sridharan; Clinton Fookes
Journal name:  IEEE Transactions on Information Forensics and Security
Publisher name:  IEEE
Volume Information:  Volume 16, Page(s): 3803-3813
Paper Link:   https://ieeexplore.ieee.org/abstract/document/9454510