Multi-level similarity learning for image-text retrieval
Published in: Information Processing & Management, 2021-01, Vol. 58 (1), p. 102432, Article 102432
Format: Article
Language: English
Summary: Image-text retrieval has been a popular research topic and attracts growing interest because it bridges the computer vision and natural language processing communities and involves two different modalities. Although many methods have made great progress on the image-text retrieval task, it remains challenging because of the difficulty of learning the correspondence between two heterogeneous modalities. In this paper, we propose a multi-level representation learning method for the image-text retrieval task, which exploits semantic-level, structural-level, and contextual-level information to improve the quality of visual and textual representations. To exploit semantic-level information, we first extract high-frequency nouns, adjectives, and numbers as semantic labels and adopt a multi-label convolutional neural network framework to encode them. To explore the structural-level information of an image-text pair, we first construct two graphs that encode the visual and textual information within each modality, and then apply graph matching with a triplet loss to reduce the cross-modal discrepancy. To further improve the retrieval results, we use contextual-level information from the two modalities to refine the ranked list and enhance retrieval quality. Extensive experiments on Flickr30k and MSCOCO, two commonly used datasets for image-text retrieval, demonstrate the superiority of the proposed method.
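The abstract names graph matching trained with a triplet loss but gives no formula. The sketch below shows one common form of this objective, a bidirectional hinge-based triplet ranking loss for cross-modal retrieval, written in PyTorch. The function name `triplet_ranking_loss`, the margin value 0.2, and the assumption of L2-normalised embeddings coming out of the two graph encoders are illustrative choices, not details taken from the paper.

```python
# A minimal sketch (not the authors' code) of a bidirectional triplet
# ranking loss for cross-modal matching, assuming the visual and textual
# graph encoders already produce L2-normalised embeddings.
import torch

def triplet_ranking_loss(img_emb: torch.Tensor,
                         txt_emb: torch.Tensor,
                         margin: float = 0.2) -> torch.Tensor:
    """Hinge-based triplet loss over all in-batch negatives.

    img_emb, txt_emb: (batch, dim) embeddings of matched image-text
    pairs, where row i of each tensor belongs to the same pair.
    """
    # Cosine similarities between every image and every caption.
    scores = img_emb @ txt_emb.t()                 # (batch, batch)
    positives = scores.diag().view(-1, 1)          # matched-pair scores

    # Image-to-text direction: caption j != i is a negative for image i.
    cost_i2t = (margin + scores - positives).clamp(min=0)
    # Text-to-image direction: image j != i is a negative for caption i.
    cost_t2i = (margin + scores - positives.t()).clamp(min=0)

    # Zero the diagonal so matched pairs contribute no cost.
    mask = torch.eye(scores.size(0), dtype=torch.bool,
                     device=scores.device)
    cost_i2t = cost_i2t.masked_fill(mask, 0)
    cost_t2i = cost_t2i.masked_fill(mask, 0)
    return cost_i2t.sum() + cost_t2i.sum()

# Example: random embeddings standing in for graph-encoded features.
img = torch.nn.functional.normalize(torch.randn(32, 512), dim=1)
txt = torch.nn.functional.normalize(torch.randn(32, 512), dim=1)
print(triplet_ranking_loss(img, txt))
```

In practice such losses are often restricted to the hardest in-batch negative rather than summed over all negatives; either variant is consistent with the abstract's description.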
ISSN: 0306-4573
EISSN: 1873-5371
DOI: | 10.1016/j.ipm.2020.102432 |