
Cross-Modal Adaptive Dual Association for Text-to-Image Person Retrieval

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2024, Vol. 26, pp. 6609-6620
Main Authors: Lin, Dixuan, Peng, Yi-Xing, Meng, Jingke, Zheng, Wei-Shi
Format: Article
Language: English
Description
Summary: Text-to-image person re-identification (ReID) aims to retrieve images of a person based on a given textual description. The key challenge is to learn the relations between detailed information from the visual and textual modalities. Existing work focuses on learning a latent space to narrow the modality gap and further builds local correspondences between the two modalities. However, these methods assume that image-to-text and text-to-image associations are modality-agnostic, resulting in suboptimal associations. In this work, we demonstrate the discrepancy between image-to-text and text-to-image associations and propose Cross-modal Adaptive Dual Association (CADA) to build fine-grained bidirectional image-text associations. Our approach features a decoder-based adaptive dual association module that allows full interaction between the visual and textual modalities, enabling bidirectional and adaptive cross-modal correspondences. Specifically, we propose a bidirectional association mechanism: Association of text Tokens to image Patches (ATP) and Association of image Regions to text Attributes (ARA). We model ATP adaptively, based on the observation that aggregating cross-modal features under mistaken associations leads to feature distortion. For ARA, since attributes are typically the first distinguishing cues of a person, we explore attribute-level associations by predicting a masked text phrase from the related image region. Finally, we learn the dual associations between texts and images, and the experimental results demonstrate the superiority of our dual formulation.
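
The summary describes ATP as text tokens attending to image patches and ARA as masked-phrase prediction conditioned on image regions. The PyTorch sketch below illustrates one plausible reading of these two mechanisms; the module names (ATPDecoder, ARAHead), the dimensions, and the use of nn.MultiheadAttention are illustrative assumptions, not the authors' released implementation.

    # Illustrative sketch only; all names and choices here are assumptions.
    import torch
    import torch.nn as nn

    class ATPDecoder(nn.Module):
        """Association of text Tokens to image Patches (ATP): text tokens
        act as queries attending over image patch features, so each token
        is grounded in the patches it matches."""
        def __init__(self, dim=512, heads=8):
            super().__init__()
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, text_tokens, image_patches):
            # text_tokens:   (B, T, D) token embeddings from a text encoder
            # image_patches: (B, P, D) patch embeddings from an image encoder
            attended, weights = self.cross_attn(
                query=text_tokens, key=image_patches, value=image_patches)
            # Residual + norm; `weights` (B, T, P) are the token-to-patch
            # associations that adaptive modeling would refine.
            return self.norm(text_tokens + attended), weights

    class ARAHead(nn.Module):
        """Association of image Regions to text Attributes (ARA): predict a
        masked text phrase from the related image region, i.e. masked
        language modeling conditioned on visual features."""
        def __init__(self, dim=512, heads=8, vocab_size=30522):
            super().__init__()
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.mlm_head = nn.Linear(dim, vocab_size)

        def forward(self, masked_text_tokens, image_patches):
            fused, _ = self.cross_attn(
                query=masked_text_tokens, key=image_patches, value=image_patches)
            return self.mlm_head(fused)  # (B, T, vocab) logits per position

    # Usage sketch with random features standing in for encoder outputs.
    B, T, P, D = 2, 16, 49, 512
    text, patches = torch.randn(B, T, D), torch.randn(B, P, D)
    grounded, assoc = ATPDecoder(D)(text, patches)  # (B, T, D), (B, T, P)
    logits = ARAHead(D)(text, patches)              # (B, T, 30522)

In this reading, ATP supplies token-level grounding for retrieval, while ARA adds an attribute-level training signal by forcing image regions to reconstruct masked descriptive phrases; both directions together form the dual association the summary refers to.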
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2024.3355644