Global-Local Graph Convolutional Network for cross-modality person re-identification
Published in: Neurocomputing (Amsterdam), 2021-09, Vol. 452, pp. 137-146
Main Authors:
Format: Article
Language: English
Summary: Visible-thermal person re-identification (VT-ReID) is an important task for retrieving pedestrians between the visible and thermal modalities. It compensates for the drawbacks of single-modality person re-identification in night-time surveillance applications. Most existing methods extract the features of different images/parts independently, ignoring the potential relationships between them. In this paper, we propose a novel Global-Local Graph Convolutional Network (GLGCN) that learns discriminative feature representations by modeling these relations through a graph convolutional network. The local graph module builds the potential relations between different body parts within each modality to extract discriminative part-level features. The global module constructs the contextual relation of the same identity across the two modalities to reduce the modality discrepancy. Training the two modules jointly further improves the robustness of the model. Experimental results on the SYSU-MM01 and RegDB datasets demonstrate that our model outperforms state-of-the-art methods.
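The abstract describes two graph modules that operate on sets of features: body-part features within one modality (local graph) and same-identity features across the two modalities (global graph). The sketch below is a minimal, generic graph-convolution step in PyTorch illustrating how such node relations could be modeled; it is not the authors' GLGCN implementation, and names such as `PartGraphConv`, `num_parts`, and the fully connected adjacency are illustrative assumptions.

```python
# A minimal sketch of one graph-convolution step over a set of node features.
# Nodes can be part-level features within a modality (local graph) or
# same-identity features from both modalities (global graph).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartGraphConv(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (batch, num_nodes, in_dim)  node features
        # adj: (num_nodes, num_nodes)      relations between nodes
        # Symmetric normalisation: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = a_hat.sum(dim=-1).clamp(min=1e-6).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(-1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        # Aggregate neighbouring node features, then project and activate.
        return F.relu(self.proj(torch.matmul(a_norm, x)))


# Usage example: 6 body-part features per image, fully connected part graph
# (both numbers are assumptions for illustration).
num_parts, feat_dim = 6, 256
layer = PartGraphConv(feat_dim, feat_dim)
parts = torch.randn(8, num_parts, feat_dim)   # a batch of 8 images
adj = torch.ones(num_parts, num_parts)        # assumed dense part relations
refined = layer(parts, adj)                   # (8, 6, 256) refined features
```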
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2021.04.080