Robust Optical and SAR Image Matching Using Attention-Enhanced Structural Features

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2024-01, Vol. 62, p. 1-1
Main Authors: Ye, Yuanxin; Yang, Chao; Gong, Guoqing; Yang, Peizhen; Quan, Dou; Li, Jiayuan
Format: Article
Language: English
Subjects: Accuracy; Attention mechanism; Deep learning; Image enhancement; Image matching; Matching; Modules; Optical and SAR images; Radar imaging; Robustness; SAR (radar); Structural features; Synthetic aperture radar
Description: Because optical and SAR images are complementary, their alignment is of increasing interest. However, the significant radiometric differences between them make precise matching very challenging. Although advanced structural features and deep learning-based methods offer feasible solutions, there is still considerable room for improvement. This paper proposes a hybrid matching method using attention-enhanced structural features (AESF), which combines the advantages of handcrafted and learning-based methods to improve the accuracy of optical and SAR image matching. It consists of two modules: a multi-branch global attention (MBGA) module and a joint multi-cropping image matching loss (MCTM) module. The MBGA module focuses on the information shared across the spatial and channel dimensions of the structural feature descriptors of heterogeneous images, significantly improving the expressive capacity of classical structural features and generating more refined and robust image features. The MCTM module exploits the association between the global and local information of the input image, optimizing a triplet loss to better discriminate between positive and negative samples. To validate its effectiveness, the proposed method is compared with five state-of-the-art matching methods on various optical and SAR datasets. The experimental results show that matching accuracy at the 1-pixel threshold improves by about 1.8%-8.7% over the most advanced deep learning method (OSMNet) and by 6.5%-23% over the handcrafted descriptor method (CFOG).
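
The abstract describes MBGA only at a high level (attention over the spatial and channel dimensions of dense structural descriptors). The following is a minimal PyTorch sketch of what such a channel-plus-spatial attention block could look like; the class name MultiBranchGlobalAttention, the CBAM-style two-branch split, and all layer sizes are assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch of a multi-branch global attention block in the spirit
# of the MBGA module described above. The paper's exact design is not given
# in this record; the two-branch layout and layer sizes are illustrative.
import torch
import torch.nn as nn

class MultiBranchGlobalAttention(nn.Module):
    """Re-weights a dense structural descriptor map (e.g. CFOG-style
    features of shape [B, C, H, W]) along channel and spatial dimensions."""

    def __init__(self, channels: int, reduction: int = 3):
        super().__init__()
        # Channel branch: global average pool -> bottleneck MLP -> sigmoid gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: mean/max channel summaries -> conv -> per-pixel gate.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # emphasise channels shared by both modalities
        summary = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_gate(summary)  # emphasise shared locations

# Example: refine 9-channel orientation-histogram descriptors of a 256x256 patch.
feats = torch.randn(1, 9, 256, 256)
refined = MultiBranchGlobalAttention(channels=9)(feats)
print(refined.shape)  # torch.Size([1, 9, 256, 256])
```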
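The MCTM objective is likewise only summarized in the abstract (a triplet loss fed with global and local views of the input). The sketch below illustrates one plausible multi-cropping triplet objective under stated assumptions: centre crops at two scales, a user-supplied embedding network, and PyTorch's built-in triplet margin loss. The function name multi_crop_triplet_loss and all hyperparameters (crop sizes, margin) are hypothetical.

```python
# Hypothetical sketch of a multi-cropping triplet objective in the spirit of
# the MCTM module. How the paper combines crop scales is not specified in
# this record; crop sizes, margin, and the averaging scheme are assumptions.
import torch
import torch.nn.functional as F

def multi_crop_triplet_loss(embed, anchor, positive, negative,
                            crop_sizes=(256, 128), margin=1.0):
    """anchor/positive/negative: [B, C, H, W] optical/SAR patches.
    embed: a network mapping a patch batch to descriptors. Pairing a global
    view with centre crops (local views) forces the descriptor to separate
    positives from negatives at several scales."""
    losses = []
    for size in crop_sizes:
        # Centre-crop all three patches to the current scale.
        h, w = anchor.shape[-2:]
        top, left = (h - size) // 2, (w - size) // 2
        crops = [t[..., top:top + size, left:left + size]
                 for t in (anchor, positive, negative)]
        a, p, n = (F.normalize(embed(c), dim=1) for c in crops)
        losses.append(F.triplet_margin_loss(a, p, n, margin=margin))
    return torch.stack(losses).mean()

# Example with a toy embedding network (global-pooled conv features).
net = torch.nn.Sequential(
    torch.nn.Conv2d(1, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
)
opt_patch, sar_pos, sar_neg = (torch.randn(4, 1, 256, 256) for _ in range(3))
loss = multi_crop_triplet_loss(net, opt_patch, sar_pos, sar_neg)
```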
DOI: 10.1109/TGRS.2024.3366247
ISSN: 0196-2892
EISSN: 1558-0644
Source: IEEE Electronic Library (IEL) Journals