ICLR: Instance Credibility-Based Label Refinement for label noisy person re-identification

Bibliographic Details
Published in: Pattern Recognition, 2024-04, Vol. 148, Article 110168
Main Authors: Zhong, Xian, Han, Xiyu, Jia, Xuemei, Huang, Wenxin, Liu, Wenxuan, Su, Shuaipeng, Yu, Xiaohan, Ye, Mang
Format: Article
Language: English
Description
Person re-identification (Re-ID) has demonstrated remarkable performance when trained on accurately annotated data. However, in practical applications, annotation errors are unavoidable and can undermine the accuracy and robustness of Re-ID model training. To address the adverse impact of label noise, especially when each identity (ID) has only a limited number of training samples, a common approach is to use all available sample labels. Unfortunately, some of these labels are incorrect, so the model is affected by the noise and its performance is compromised. In this paper, we propose an Instance Credibility-based Label Refinement and Re-weighting (ICLR) framework that exploits partially credible labels to refine and re-weight incredible labels effectively. Specifically, the Label-Incredibility Optimization (LIO) module optimizes incredible labels before model training: it partitions the samples into credible and incredible subsets and propagates credible labels to the others. Furthermore, we design an Incredible Instance Re-weight (I2R) strategy that emphasizes instances contributing more significantly and dynamically adjusts the weight of each instance. The proposed method improves accuracy without requiring additional information or discarding any samples. Extensive experiments on the Market-1501 and DukeMTMC-reID datasets demonstrate the effectiveness of our proposed method, yielding substantial improvements under both random-noise and pattern-noise settings. Code will be available at https://github.com/whut16/ReID-Label-Noise.

Highlights:
• Inevitable label noise affects the performance of Re-ID.
• All samples are partitioned and optimized before training, emphasizing the cleanliness of the data.
• Dynamically adjusting the weight of each instance fosters the reuse and re-weighting of all available samples.
• The improvement achieved by our proposal under random and pattern noise is noteworthy.
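The abstract only sketches the LIO and I2R ideas at a high level. The following minimal Python sketch illustrates the general recipe such a framework could follow: partition samples into credible and incredible subsets, propagate credible labels to the rest, and down-weight the refined instances. The small-loss credibility criterion, the k-nearest-neighbor propagation rule, the exponential re-weighting, and all function names (partition_by_credibility, propagate_labels, instance_weights) are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

# Hypothetical sketch of a credible/incredible partition, label refinement, and
# instance re-weighting pipeline. All design choices here are assumptions made
# for illustration, not the method from the paper.
import numpy as np

def partition_by_credibility(per_sample_loss, credible_ratio=0.6):
    """Treat the lowest-loss fraction of samples as 'credible' (small-loss heuristic)."""
    n = len(per_sample_loss)
    order = np.argsort(per_sample_loss)
    split = int(credible_ratio * n)
    return order[:split], order[split:]          # credible_idx, incredible_idx

def propagate_labels(features, noisy_labels, credible_idx, incredible_idx, k=5):
    """Refine each incredible label by majority vote over its k nearest credible neighbors."""
    refined = noisy_labels.copy()                # noisy_labels: non-negative integer IDs
    cred_feats = features[credible_idx]
    for i in incredible_idx:
        dists = np.linalg.norm(cred_feats - features[i], axis=1)
        neighbors = credible_idx[np.argsort(dists)[:k]]
        refined[i] = int(np.argmax(np.bincount(noisy_labels[neighbors])))
    return refined

def instance_weights(per_sample_loss, incredible_idx, temperature=1.0):
    """Down-weight refined (formerly incredible) instances in proportion to their loss."""
    weights = np.ones_like(per_sample_loss, dtype=float)
    weights[incredible_idx] = np.exp(-per_sample_loss[incredible_idx] / temperature)
    return weights

In a training loop, one could recompute the partition each epoch from the current per-sample losses, train on the refined labels, and scale each sample's loss term by its weight, so that no sample is discarded but less credible ones contribute less.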
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2023.110168