
JRA-Net: Joint Representation Attention Network for Correspondence Learning


Bibliographic Details
Published in: Pattern Recognition, 2023-03, Vol. 135, p. 109180, Article 109180
Main Authors: Shi, Ziwei; Xiao, Guobao; Zheng, Linxin; Ma, Jiayi; Chen, Riqing
Format: Article
Language: English
Description:
•We design a three-layer deep learning framework for outlier rejection.
•We propose a novel Joint Representation Attention mechanism.
•We design an innovative weight function to improve the generalization ability.
•Experimental results show the proposed network is superior to state-of-the-art networks.

In this paper, we propose a Joint Representation Attention Network (JRA-Net), an end-to-end network, to establish reliable correspondences for image pairs. The initial correspondences generated by a local feature descriptor usually suffer from heavy outliers, which prevents the network from learning a representation powerful enough to distinguish inliers from outliers. To this end, we design a novel attention mechanism. The proposed attention mechanism not only takes into account the correlations between global context and geometric information, but also introduces a joint representation of different scales to suppress trivial correspondences and highlight crucial ones. In addition, to improve the generalization ability of the attention mechanism, we present an innovative weight function that adjusts the importance of the attention mechanism in a learned manner. Finally, by combining the above components, JRA-Net is able to effectively infer the probability of each correspondence being an inlier. Empirical experiments on challenging datasets demonstrate the effectiveness and generalization of JRA-Net. We achieve remarkable improvements over current state-of-the-art approaches on outlier rejection and relative pose estimation.
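The ingredients the abstract names (a global context pooled over all correspondences, a joint representation at more than one scale, and a learned weight that blends the attention branch with the raw evidence before inferring inlier probabilities) can be caricatured in a few lines of NumPy. This is an illustrative sketch only, not the authors' JRA-Net: the function names, the mean-pooling choices, the motion-residual "geometric information", and the fixed scalar `alpha` standing in for the learned weight function are all assumptions made for exposition.

```python
# Illustrative sketch: an attention gate over putative correspondences.
# NOT the JRA-Net implementation; every design choice below is assumed.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_gate(features, coords, alpha=0.5):
    """features: (N, d) per-correspondence descriptors.
    coords:   (N, 4) putative matches (x1, y1, x2, y2).
    alpha:    scalar blending weight (fixed here; in a network it
              would be learned, as the paper's weight function is).
    Returns per-correspondence inlier probabilities in (0, 1)."""
    n, d = features.shape
    # "Global context": mean-pool the features of all correspondences.
    global_ctx = features.mean(axis=0)                   # (d,)
    # "Geometric information": how far each match's motion vector
    # deviates from the mean motion (outliers deviate strongly).
    motion = coords[:, 2:] - coords[:, :2]               # (N, 2)
    geo_score = -np.linalg.norm(motion - motion.mean(axis=0), axis=1)
    # Joint representation at a second, coarser scale: a context pooled
    # over a subset, combined with the global one.
    local_ctx = features[: max(1, n // 2)].mean(axis=0)  # (d,)
    sim = features @ global_ctx + features @ local_ctx   # (N,)
    attn = softmax(sim + geo_score)                      # attention weights
    # Weight function: blend the attended evidence with the raw
    # geometric evidence, then squash to an inlier probability.
    logits = alpha * (n * attn) + (1.0 - alpha) * geo_score
    return 1.0 / (1.0 + np.exp(-logits))

# Toy usage: 6 correspondences; the last one has an outlier-like motion.
rng = np.random.default_rng(0)
coords = np.tile([0.0, 0.0, 1.0, 1.0], (6, 1))
coords[-1, 2:] = [9.0, -9.0]                             # deviant motion
feats = rng.standard_normal((6, 8))
p = attention_gate(feats, coords)
assert p.shape == (6,) and np.all((p > 0) & (p < 1))
assert p[-1] < p[:-1].min()   # the deviant match scores lowest
```

The point of the sketch is only the data flow: context pooling at two scales feeds an attention weight per correspondence, and a blending weight decides how much that attention contributes to the final inlier probability.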
DOI: 10.1016/j.patcog.2022.109180
Publisher: Elsevier Ltd
ISSN: 0031-3203
EISSN: 1873-5142
Subjects: Attention mechanism; Correspondences; Joint representation; Outlier rejection; Pose estimation