
Deep Uncoupled Discrete Hashing via Similarity Matrix Decomposition

Hashing has been drawing increasing attention in large-scale image retrieval owing to its storage and computation efficiency, especially the recent asymmetric deep hashing methods. These approaches treat the query and database asymmetrically and can take full advantage of the whole training data. Though they have achieved state-of-the-art performance, asymmetric deep hashing methods still suffer from large quantization error and efficiency problems on large-scale datasets due to the tight coupling between the query and database. In this article, we propose a novel asymmetric hashing method, called Deep Uncoupled Discrete Hashing (DUDH), for large-scale approximate nearest neighbor search. Instead of directly preserving the similarity between the query and database, DUDH first exploits a small similarity-transfer image set to transfer the underlying semantic structures from the database to the query and implicitly keep the desired similarity. As a result, the large similarity matrix is decomposed into two relatively small ones and the query is decoupled from the database. Both database codes and similarity-transfer codes are then learned directly during optimization. The quantization error of DUDH exists only in the process of preserving similarity between the query and the similarity-transfer set. By uncoupling the query from the database, the training cost of optimizing the CNN model for the query is no longer related to the size of the database. In addition, to further accelerate the training process, we propose to optimize the similarity-transfer codes with a constant-approximation solution, so that the training cost of optimizing the similarity-transfer codes becomes almost negligible. Extensive experiments on four widely used image retrieval benchmarks demonstrate that DUDH achieves state-of-the-art retrieval performance with a remarkable reduction in training cost (30×–50× relative).
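
The abstract's central idea can be sketched in a few lines: because the query network is supervised only against a small similarity-transfer set of size m, the large query-database similarity matrix is replaced by a much smaller query-transfer matrix, so a training step for the CNN no longer depends on the database size. The NumPy sketch below is illustrative only; the toy label construction, the variable names, and the squared-error inner-product surrogate are assumptions, since the exact DUDH objective is not reproduced in this record.

    import numpy as np

    rng = np.random.default_rng(0)

    n_q, m, k = 5_000, 1_000, 64          # training queries, similarity-transfer set size, code length
    labels_q = rng.integers(0, 10, n_q)   # toy single-label annotations (assumed for illustration)
    labels_t = rng.integers(0, 10, m)

    # Semantic similarity is built only between queries and the transfer set
    # (an n_q x m matrix), not between queries and the full database.
    S_qt = np.where(labels_q[:, None] == labels_t[None, :], 1.0, -1.0)

    B_t = np.sign(rng.standard_normal((m, k)))    # similarity-transfer codes (learned directly in DUDH; random here)
    U_q = np.tanh(rng.standard_normal((n_q, k)))  # relaxed query outputs, e.g. the CNN's final layer

    # A common asymmetric surrogate: make the scaled inner product match the similarity.
    loss = np.mean((U_q @ B_t.T / k - S_qt) ** 2)
    print(f"query-vs-transfer loss: {loss:.4f}")  # cost depends on m, not on the database size

Because every term in this surrogate loss involves only the m transfer codes, enlarging the database leaves the query network's training cost unchanged, which is the decoupling property the abstract emphasizes; quantization error then arises only when binarizing the query outputs against the transfer set.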


Bibliographic Details
Published in: ACM Transactions on Multimedia Computing, Communications, and Applications, 2023-01, Vol. 19 (1), p. 1-22, Article 22
Main Authors: Wu, Dayan; Dai, Qi; Li, Bo; Wang, Weiping
Format: Article
Language: English
Subjects: Image search; Information systems
DOI: 10.1145/3524021
ISSN: 1551-6857
EISSN: 1551-6865
Source: Association for Computing Machinery: Jisc Collections: ACM OPEN Journals 2023-2025 (reading list)