Global contextual guided residual attention network for salient object detection
Published in: Applied Intelligence (Dordrecht, Netherlands), 2022-04, Vol. 52 (6), p. 6208-6226
Main Authors: Wang, Jun; Zhao, Zhengyun; Yang, Shangqin; Chai, Xiuli; Zhang, Wanjun; Zhang, Miaohui
Format: Article
Language: English
Publisher: New York: Springer US
ISSN: 0924-669X
EISSN: 1573-7497
DOI: 10.1007/s10489-021-02713-8
Source: ABI/INFORM Global; Springer Link
Subjects: Artificial Intelligence; Artificial neural networks; Computer Science; Feature extraction; Feature maps; Learning; Machines; Manufacturing; Mechanical Engineering; Modules; Object recognition; Processes; Salience; Semantics
Abstract: Both high-level semantic features and low-level detail features matter for salient object detection in fully convolutional neural networks (FCNs), and integrating them further improves the network's ability to represent salient objects. In addition, different channels of the same feature map are not equally important for saliency detection. In this paper, we propose a residual attention learning strategy and a multistage refinement mechanism that gradually refine the coarse prediction scale by scale. First, a global information complementary (GIC) module is designed to integrate low-level detail features and high-level semantic features. Second, a multiscale parallel convolutional (MPC) module extracts multiscale features from the same layer. Next, a residual attention mechanism module (RAM) receives the feature maps of adjacent stages from the hybrid feature cascaded aggregation (HFCA) module; the HFCA enhances the feature maps, reducing the loss of spatial detail and the impact of variations in object shape, scale, and position. Finally, a multiscale cross-entropy loss guides the network in learning salient features. Experimental results on six benchmark datasets demonstrate that the proposed method significantly outperforms 15 state-of-the-art methods under various evaluation metrics.
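This record contains only the abstract, not the authors' implementation, so the sketches below are guesses at the shape of the described modules. A multiscale parallel convolutional module of the kind the abstract names is commonly built from parallel dilated convolutions over the same input; a minimal PyTorch sketch, with illustrative branch counts and dilation rates (`MultiscaleParallelConv` and its parameters are hypothetical names, not from the paper):

```python
import torch
import torch.nn as nn

class MultiscaleParallelConv(nn.Module):
    """Hypothetical sketch: parallel 3x3 convolutions with growing
    dilation rates capture context at several scales from one layer,
    then a 1x1 convolution fuses the concatenated branches."""

    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Padding equals dilation for a 3x3 kernel, so every branch keeps
        # the input's spatial size and the outputs concatenate cleanly.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```

Because each branch preserves spatial resolution, the module is a drop-in block at any stage of the backbone.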
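The abstract's two other core ideas, channel-wise reweighting ("different channels ... are not of equal importance") and residual refinement of a coarse prediction, can be sketched in the same hedged spirit, here assuming a squeeze-and-excitation style gate; all names are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: learn a weight in (0, 1) per
    channel, since channels are not equally informative for saliency."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                            # reweight channels

class ResidualRefineStage(nn.Module):
    """One stage of coarse-to-fine refinement: predict only a residual
    correction to the upsampled coarse saliency map, rather than
    regenerating the whole map at every stage."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = ChannelAttention(channels)
        self.head = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, feat: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        coarse = F.interpolate(coarse, size=feat.shape[-2:],
                               mode="bilinear", align_corners=False)
        return coarse + self.head(self.attn(feat))         # refined logits
```

Predicting only the correction is what makes the learning "residual": each stage needs to model just what the previous, coarser stage got wrong.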
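Finally, "multiscale cross-entropy loss" usually denotes deep supervision: one cross-entropy term per stage resolution. A plausible sketch, assuming binary cross-entropy and equal weights per scale (the record does not specify the weighting):

```python
import torch
import torch.nn.functional as F

def multiscale_bce_loss(preds, gt):
    """Deep-supervision sketch: one binary cross-entropy term per stage,
    with the ground-truth mask resized to each prediction's resolution.
    `preds` is a list of saliency logits, coarsest to finest; `gt` is a
    float mask of shape (B, 1, H, W). Equal per-scale weights are an
    assumption, not taken from the paper."""
    total = torch.zeros((), device=gt.device)
    for p in preds:
        gt_s = F.interpolate(gt, size=p.shape[-2:], mode="bilinear",
                             align_corners=False)
        total = total + F.binary_cross_entropy_with_logits(p, gt_s)
    return total
```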