Unsupervised Multiview Distributed Hashing for Large-Scale Retrieval
Published in: | IEEE Transactions on Circuits and Systems for Video Technology, 2022-12, Vol.32 (12), p.8837-8848 |
---|---|
Main Authors: | Shen, Xiaobo; Tang, Yunpeng; Zheng, Yuhui; Yuan, Yun-Hao; Sun, Quan-Sen |
Format: | Article |
Language: | English |
Subjects: | Clustering; Codes; Computer aided instruction; Consistency; Costs; Datasets; Distance learning; Distributed databases; distributed learning; Hash functions; Hashing; Learning; multi-view data; Nodes; Retrieval; Training; Training data |
container_end_page | 8848 |
container_issue | 12 |
container_start_page | 8837 |
container_title | IEEE transactions on circuits and systems for video technology |
container_volume | 32 |
creator | Shen, Xiaobo; Tang, Yunpeng; Zheng, Yuhui; Yuan, Yun-Hao; Sun, Quan-Sen |
description | Multi-view hashing (MvH) learns compact hash codes by efficiently integrating multi-view data, and has achieved promising performance in large-scale retrieval tasks. In real-world applications, multi-view data are often stored or collected at different locations, and learning hash codes in this setting is more challenging and less studied. In addition, unsupervised MvH methods rarely achieve impressive retrieval performance due to the absence of supervision. To fill this gap, this paper introduces a novel unsupervised multi-view distributed hashing (UMvDisH) method that learns hash codes from multi-view data distributed across the nodes of a network. In each node, UMvDisH jointly performs latent factor modeling and spectral clustering to generate latent hash codes and pseudo labels, respectively. Enforcing consistency between the hash codes and pseudo labels improves the discrimination of the hash codes. The distributed learning problem is divided into a set of decentralized subproblems by imposing local consistency among neighboring nodes; the subproblems can therefore be solved in parallel, which reduces training time. The communication cost is low because no training data are exchanged. Experimental results on four benchmark image datasets, including one very large-scale image dataset, show that UMvDisH achieves comparable retrieval performance and trains faster than state-of-the-art unsupervised MvH methods in the distributed setting. |
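The abstract's core idea, that nodes exchange only code estimates with their neighbors (never raw training data) until local consistency is reached, can be illustrated with a toy sketch. This is not the published UMvDisH algorithm: the ring topology, the PCA-style stand-in for the latent factor model, the averaging weights, and the omission of the spectral-clustering/pseudo-label term are all simplifying assumptions for illustration.

```python
# Toy sketch of decentralized hash-code learning with neighbor consistency.
# Assumptions (not from the paper): 3 fully connected nodes, a sign(PCA)
# code learner standing in for the latent factor model, uniform averaging.
import numpy as np

rng = np.random.default_rng(0)
n_samples, code_len = 100, 8

# Each node observes its own view (different feature dims) of the same samples.
views = [rng.standard_normal((n_samples, d)) for d in (16, 24, 32)]
# Each node communicates only with its listed neighbors.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def local_codes(X, k):
    """Continuous code estimate from local data via a PCA-style projection
    (an illustrative stand-in for the paper's latent factor model)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

Z = [local_codes(X, code_len) for X in views]

# Impose local consistency: each node averages its estimate with its
# neighbors'. Only (n_samples x code_len) estimates cross the network,
# never the training data itself, so communication cost stays low.
for _ in range(20):
    Z = [
        0.5 * Z[i] + 0.5 * np.mean([Z[j] for j in neighbors[i]], axis=0)
        for i in range(len(views))
    ]

B = [np.sign(z) for z in Z]  # binarize consensus estimates into hash codes
agreement = np.mean(B[0] == B[1])  # fraction of bits on which nodes agree
```

After a few consensus rounds the per-node estimates converge toward a common value, so the binarized codes of different nodes agree on nearly all bits, which is the "local consistency among neighbor nodes" effect the abstract describes.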
doi_str_mv | 10.1109/TCSVT.2022.3197849 |
format | article |
identifier | ISSN: 1051-8215 |
ispartof | IEEE transactions on circuits and systems for video technology, 2022-12, Vol.32 (12), p.8837-8848 |
issn | 1051-8215; 1558-2205 |
language | eng |
source | IEEE Electronic Library (IEL) Journals |
subjects | Clustering; Codes; Computer aided instruction; Consistency; Costs; Datasets; Distance learning; Distributed databases; distributed learning; Hash functions; Hashing; Learning; multi-view data; Nodes; Retrieval; Training; Training data |
title | Unsupervised Multiview Distributed Hashing for Large-Scale Retrieval |