Visible-Infrared Cross-Modality Person Re-Identification via Adaptive Weighted Triplet Loss and Progressive Training
Published in: IEEE Access, 2024, Vol. 12, pp. 181799-181807
Main Authors:
Format: Article
Language: English
Summary: Visible-infrared cross-modality person re-identification (VI-ReID) aims to match images of the same person across multiple non-overlapping cameras of different modalities, and therefore covers a wider range of application scenarios than single-modality person re-identification. The main difficulty of VI-ReID is the large visual discrepancy between the visible and infrared modalities. In this paper, an adaptive weighted triplet loss is proposed that adaptively adjusts the weights of triplet samples; it reduces the impact of outlier samples and focuses training mainly on the mid-hard samples. We also introduce a channel random shuffle data augmentation method that can be easily integrated into existing frameworks; it reduces the dependence on color information and improves robustness to color variations. A progressive training strategy is employed to further improve performance. Experiments show that the proposed methods achieve state-of-the-art results on the two public datasets SYSU-MM01 and RegDB without additional computational cost. (Illustrative sketches of the adaptive weighted triplet loss and the channel shuffle augmentation follow the record below.)
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3510425
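
The abstract describes the adaptive weighted triplet loss only at a high level: weights over triplet samples are adjusted adaptively so that outlier pairs are down-weighted and mid-hard samples dominate the gradient. The exact weighting scheme is not given in this record; the sketch below is one plausible soft-weighted formulation (softmax weights over all positive and negative pairs per anchor with a soft margin), and the function name and details are assumptions, not the paper's definition.

```python
# Hedged sketch of an adaptively weighted triplet loss: instead of mining only
# the single hardest positive/negative, every pair contributes with a softmax
# weight, so no extreme outlier pair dominates the loss. Assumes PK sampling,
# i.e. each identity appears at least twice in the batch.
import torch
import torch.nn.functional as F

def adaptive_weighted_triplet_loss(feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """feats: (N, D) embeddings, labels: (N,) person identity labels."""
    dist = torch.cdist(feats, feats, p=2)                       # pairwise Euclidean distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()  # 1 where identities match
    pos_mask = same.clone()
    pos_mask.fill_diagonal_(0)                                  # exclude anchor-to-itself pairs
    neg_mask = 1.0 - same

    # Softmax weights: farther positives and closer negatives receive larger
    # weights; masked-out entries get a large negative score (near-zero weight).
    w_pos = F.softmax(dist * pos_mask - 1e9 * (1 - pos_mask), dim=1)
    w_neg = F.softmax(-dist * neg_mask - 1e9 * (1 - neg_mask), dim=1)

    d_pos = (w_pos * dist).sum(dim=1)                           # weighted positive distance per anchor
    d_neg = (w_neg * dist).sum(dim=1)                           # weighted negative distance per anchor
    return F.softplus(d_pos - d_neg).mean()                     # soft-margin triplet objective
```

In practice such a loss is typically combined with an identity classification loss; how the paper schedules the two under its progressive training strategy is not stated in this record.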
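
The channel random shuffle augmentation is likewise only named in the abstract. A common reading, shown below as a minimal sketch, is to randomly permute the RGB channels of visible-modality images so the network cannot rely on absolute color cues that are absent in the infrared modality. The class name and the probability parameter `p` are illustrative, not taken from the paper.

```python
# Minimal sketch of a channel random shuffle transform for visible images,
# assuming CHW float tensors as produced by torchvision's ToTensor().
import random
import torch

class ChannelRandomShuffle:
    """Randomly permute the color channels of a 3-channel CHW image tensor."""

    def __init__(self, p: float = 0.5):
        self.p = p  # probability of applying the shuffle

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        if img.size(0) == 3 and random.random() < self.p:
            perm = torch.randperm(3)  # random ordering of the R, G, B channels
            img = img[perm]
        return img

# Usage: insert into an existing pipeline, e.g.
# transforms.Compose([transforms.ToTensor(), ChannelRandomShuffle(p=0.5), ...]),
# which matches the abstract's claim that the augmentation drops into existing frameworks.
```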