Fine-grained Learning for Visible-Infrared Person Re-identification
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Visible-Infrared Person Re-identification aims to retrieve specific identities across different modalities. To relieve the modality discrepancy, previous works mainly concentrate on aligning the distribution of high-level features while disregarding fine-grained information. In this paper, we propose a novel Fine-grained Information Exploration Network (FIENet) to learn discriminative representations and further alleviate the modality discrepancy. First, we propose a Progressive Feature Aggregation Module (PFAM) to progressively aggregate mid-level features, and a Multi-Perception Interaction Module (MPIM) to enable interaction among diverse perceptions. Combining PFAM and MPIM extracts richer fine-grained information, which helps FIENet focus effectively on discriminative human parts in both modalities. Second, at the level of feature centers, we introduce an Identity-Guided Center Loss (IGCL) to supervise identity representations with intra-identity and inter-identity information. Finally, extensive experiments demonstrate that our method achieves state-of-the-art performance.
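The record does not spell out how the Identity-Guided Center Loss is computed, but a loss supervising "intra-identity and inter-identity information" around feature centers can be illustrated by the generic center-loss pattern: pull each feature toward its own identity's center, and push centers of different identities apart by a margin. The function below is a minimal NumPy sketch under that assumption; the name `identity_center_loss` and the margin-based inter-identity term are illustrative, not the paper's actual formulation.

```python
import numpy as np

def identity_center_loss(features, labels, margin=1.0):
    """Sketch of a center-based identity loss (assumed form, not the
    paper's exact IGCL): intra-identity pull + inter-identity margin push."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    ids = np.unique(labels)

    # Per-identity feature centers (mean of that identity's features).
    centers = {i: features[labels == i].mean(axis=0) for i in ids}

    # Intra-identity term: mean squared distance of each feature
    # to its own identity's center.
    intra = np.mean([np.sum((f - centers[l]) ** 2)
                     for f, l in zip(features, labels)])

    # Inter-identity term: hinge on pairwise center distances,
    # penalizing centers closer than the margin.
    inter, pairs = 0.0, 0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d = np.linalg.norm(centers[ids[a]] - centers[ids[b]])
            inter += max(0.0, margin - d)
            pairs += 1
    inter /= max(pairs, 1)

    return intra + inter
```

With well-separated identities (centers farther apart than the margin and zero within-identity spread) the loss is zero; as features of one identity scatter, or centers of different identities drift together, the loss grows.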
ISSN: 1945-788X
DOI: 10.1109/ICME55011.2023.00412