
Cross-adversarial consistency self-prediction learning for unsupervised domain adaptation person re-identification

Bibliographic Details
Published in: Information Sciences, 2021-06, Vol. 559, pp. 46-60
Main Authors: Li, Huafeng; Pang, Jian; Tao, Dapeng; Yu, Zhengtao
Format: Article
Language: English
Description
Summary: Domain-invariant feature extraction has become very popular for unsupervised domain adaptation (UDA) person re-identification (Re-ID). However, most methods built on it are limited by the weak discriminative power of the learned domain-invariant features. To solve this problem, we develop a new approach: cross-adversarial consistency self-prediction learning. Cross-adversarial consistency endows the learned features with both domain invariance and discriminability; consistency self-prediction fine-tunes the pre-trained model by selecting unpaired samples from the target data. First, the camera views of the source domain, together with their samples, are randomly divided into two groups. Two identity classifiers (identifiers) are then applied crosswise to the two groups, and adversarial learning between the identifiers and the feature encoder forces their predictions to agree. To refine the model, a self-prediction mechanism is introduced that conservatively selects target-domain samples with high identity similarity to labeled source-domain samples. This design helps to alleviate the domain bias between the source and target domains. Experiments conducted on five benchmark datasets verify that the proposed method is effective and outperforms state-of-the-art competitors. The source code of our method is available at https://github.com/PangJian123/CAC-CSP.
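
The summary above outlines two mechanisms: a cross-adversarial consistency loss between two identifiers trained on disjoint camera-view groups, and a conservative self-prediction step that pseudo-labels target samples by their similarity to labeled source samples. Below is a minimal PyTorch-style sketch of how these two ideas might look, under assumptions of our own: linear identifiers, an L1 discrepancy between their predictions, cosine similarity for sample selection, and a hypothetical threshold tau. It is an illustration only, not the authors' implementation; see the linked repository for the actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

def cross_consistency_loss(feats_g1, feats_g2, identifier1, identifier2):
    # Each identifier is also applied to the group it was NOT trained on
    # ("crosswise"); the encoder is updated so the two identifiers agree,
    # while the identifiers are updated adversarially to expose their
    # disagreement (e.g., via alternating optimization steps).
    p1_g1 = F.softmax(identifier1(feats_g1), dim=1)
    p2_g1 = F.softmax(identifier2(feats_g1), dim=1)
    p1_g2 = F.softmax(identifier1(feats_g2), dim=1)
    p2_g2 = F.softmax(identifier2(feats_g2), dim=1)
    # L1 discrepancy between the two identifiers on the same features.
    return (p1_g1 - p2_g1).abs().mean() + (p1_g2 - p2_g2).abs().mean()

def select_confident_targets(target_feats, source_feats, source_labels, tau=0.8):
    # Conservative self-prediction: keep only target samples whose cosine
    # similarity to their nearest labeled source sample exceeds tau, and
    # pseudo-label them with that source identity. tau is a hypothetical
    # hyperparameter, not a value taken from the paper.
    t = F.normalize(target_feats, dim=1)
    s = F.normalize(source_feats, dim=1)
    sim = t @ s.t()                          # (n_target, n_source)
    best_sim, best_idx = sim.max(dim=1)      # nearest source sample per target
    keep = (best_sim > tau).nonzero(as_tuple=True)[0]
    return keep, source_labels[best_idx[keep]]

# Toy usage with random features standing in for encoder outputs.
if __name__ == "__main__":
    n_ids, dim = 100, 256
    id1, id2 = nn.Linear(dim, n_ids), nn.Linear(dim, n_ids)
    g1, g2 = torch.randn(32, dim), torch.randn(32, dim)
    print(cross_consistency_loss(g1, g2, id1, id2).item())
    src, tgt = torch.randn(64, dim), torch.randn(48, dim)
    labels = torch.randint(0, n_ids, (64,))
    idx, pseudo = select_confident_targets(tgt, src, labels, tau=0.3)
    print(idx.shape, pseudo.shape)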
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2021.01.016