SCH: Symmetric Consistent Hashing for cross-modal retrieval
Published in: Signal Processing, 2024-02, Vol. 215, p. 109255, Article 109255
Main Authors:
Format: Article
Language: English
Summary: When handling large-scale multimodal data, hashing-based retrieval methods have gained significant attention due to their low storage consumption and fast query speed. Despite the good performance of supervised hashing techniques, several limitations remain. First, these methods ignore valuable information such as reconstruction residuals when employing matrix factorization to learn the low-dimensional space. Second, most methods map heterogeneous data into the same feature space, which is unreasonable because different modalities possess distinct dimensions and distributions. Last, some methods relax constraints in order to optimize discrete hash codes, leading to considerable quantization errors. To address these problems, a novel two-step hashing technique called Symmetric Consistent Hashing (SCH) is proposed. In the first step, two separate latent semantic spaces are learned by leveraging Principal Component Analysis to mitigate principal energy loss and Locality Preserving Projections to preserve local similarity. The latent spaces of the different modalities are then aligned by exploiting modality consistency, so that high-level semantic information is shared across modalities. Hash codes are subsequently learned from the distinct latent spaces and embedded in the second step to learn hash functions through Kernel Logistic Regression. To validate the effectiveness of SCH, extensive experiments are conducted on three publicly available benchmark datasets. The results demonstrate the superiority of SCH over state-of-the-art hashing baselines, confirming the effectiveness of the proposed approach.
Highlights:
• Preserves the reconstruction residual of the data when learning two different latent semantic spaces.
• Aligns the latent spaces of different modalities by exploiting the consistency among modalities.
• Conducts extensive experiments on three public datasets and achieves the best results.
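The abstract describes a two-step pipeline: per-modality latent spaces (PCA plus LPP), cross-modal alignment, binary code generation, and hash functions trained with Kernel Logistic Regression. The Python sketch below is a rough, hypothetical rendering of that pipeline and is not the authors' implementation: PCA alone stands in for the PCA+LPP objective, an orthogonal Procrustes rotation stands in for the modality-consistency alignment, simple sign binarization stands in for the paper's discrete optimization, and kernel logistic regression is approximated by ordinary logistic regression on an RBF kernel map. `X_img` and `X_txt` are assumed to be paired feature matrices with one row per training sample.

```python
# Hypothetical sketch of a two-step cross-modal hashing pipeline in the
# spirit of SCH; NOT the authors' code (see assumptions in the lead-in).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

def learn_latent_spaces(X_img, X_txt, n_bits):
    """Step 1a: separate low-dimensional latent spaces per modality
    (PCA only here; the paper also uses Locality Preserving Projections)."""
    Z_img = PCA(n_components=n_bits).fit_transform(X_img)
    Z_txt = PCA(n_components=n_bits).fit_transform(X_txt)
    return Z_img, Z_txt

def align_latent_spaces(Z_img, Z_txt):
    """Step 1b: rotate the text latent space onto the image latent space
    (orthogonal Procrustes, a stand-in for the consistency alignment)."""
    U, _, Vt = np.linalg.svd(Z_txt.T @ Z_img)
    R = U @ Vt                       # orthogonal alignment matrix
    return Z_img, Z_txt @ R

def learn_hash_codes(Z_img, Z_txt_aligned):
    """Step 1c: binarize the aligned latent representations into {-1, +1}
    (simple sign binarization instead of the paper's discrete optimization)."""
    B = np.sign(Z_img + Z_txt_aligned)
    B[B == 0] = 1
    return B

def learn_hash_functions(X, B, gamma=1.0):
    """Step 2: one kernel-logistic classifier per bit maps raw features of a
    modality to hash bits, using an RBF kernel map over the training anchors."""
    K = rbf_kernel(X, X, gamma=gamma)
    clfs = [LogisticRegression(max_iter=1000).fit(K, (B[:, b] > 0).astype(int))
            for b in range(B.shape[1])]
    return clfs, X                   # keep anchors to build query kernels

def hash_queries(X_query, clfs, anchors, gamma=1.0):
    """Encode unseen queries with the learned hash functions."""
    Kq = rbf_kernel(X_query, anchors, gamma=gamma)
    bits = np.stack([clf.predict(Kq) for clf in clfs], axis=1)
    return 2 * bits - 1              # map {0, 1} predictions back to {-1, +1}
```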
ISSN: 0165-1684, 1872-7557
DOI: 10.1016/j.sigpro.2023.109255