DFNet: Discriminative feature extraction and integration network for salient object detection
Despite the powerful feature extraction capability of Convolutional Neural Networks, there are still some challenges in saliency detection. In this paper, we focus on two challenges: (i) since salient objects appear in various sizes, a single-scale convolution cannot capture the right size, and using multi-scale convolutions without considering their importance may confuse the model; (ii) employing multi-level features helps the model use both local and global context, but treating all features equally results in information redundancy, so a mechanism is needed to intelligently select which features at different levels are useful. To address the first challenge, we propose a Multi-scale Attention Guided Module, which not only extracts multi-scale features effectively but also gives more attention to the discriminative feature maps corresponding to the scale of the salient object. To address the second challenge, we propose an Attention-based Multi-level Integrator Module, which gives the model the ability to assign different weights to multi-level feature maps. Furthermore, our Sharpening Loss function guides the network to output saliency maps with higher certainty and less blurry salient objects, and it performs far better than the cross-entropy loss. For the first time, we adopt four different backbones to show the generalization of our method. Experiments on five challenging datasets show that our method achieves state-of-the-art performance. Our approach is also fast and can run at real-time speed.

Highlights:
• A novel architecture for salient object detection is introduced.
• A module is proposed to capture the right size for the salient object.
• A module is proposed to intelligently weight feature maps of different levels.
• Our loss function leads to sharper salient objects compared to the cross-entropy loss.
• Our method achieves state-of-the-art performance on five challenging datasets.
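The Multi-scale Attention Guided Module described above combines convolutions at several scales and then re-weights the resulting feature maps so that the scale matching the salient object dominates. As a reading aid only, here is a minimal PyTorch-style sketch of that general idea; the branch count, dilation rates, and the squeeze-and-excitation-style gate are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultiScaleAttentionBlock(nn.Module):
    """Illustrative sketch (not the authors' code): parallel dilated convolutions
    give several receptive fields, and an SE-style gate re-weights the stacked
    feature maps so the scale matching the salient object can dominate."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8), reduction: int = 4):
        super().__init__()
        # One 3x3 branch per dilation rate -> one "scale" per branch.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        cat_ch = out_ch * len(dilations)
        # Global pooling -> bottleneck -> sigmoid weights, one per channel of the stack.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(cat_ch, cat_ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(cat_ch // reduction, cat_ch, 1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(cat_ch, out_ch, 1)  # collapse the attended stack

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        stack = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(stack * self.gate(stack))


# Example: a 256-channel backbone feature map reduced to 64 attended channels.
y = MultiScaleAttentionBlock(256, 64)(torch.randn(1, 256, 32, 32))  # -> (1, 64, 32, 32)
```

The key point this sketch captures is that the attention weights are computed over the concatenated multi-scale stack, so the gate can favour the branches (scales) that are most informative for the current input.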
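Similarly, the Attention-based Multi-level Integrator Module is described as assigning different weights to feature maps coming from different encoder levels before they are combined. The sketch below shows one plausible way to do this, assuming each level is projected to a common width and re-weighted channel-wise; the ChannelGate block, channel widths, and bilinear resizing are illustrative choices, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelGate(nn.Module):
    """SE-style per-channel re-weighting (an illustrative choice, not the paper's block)."""

    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)


class MultiLevelIntegrator(nn.Module):
    """Illustrative integrator: project each encoder level to a common width,
    re-weight its channels, resize everything to the finest resolution, and fuse."""

    def __init__(self, in_chs, mid_ch: int = 64):
        super().__init__()
        self.projs = nn.ModuleList(nn.Conv2d(c, mid_ch, 1) for c in in_chs)
        self.gates = nn.ModuleList(ChannelGate(mid_ch) for _ in in_chs)
        self.fuse = nn.Conv2d(mid_ch * len(in_chs), mid_ch, 3, padding=1)

    def forward(self, feats):
        # feats: list of feature maps from shallow (high-res) to deep (low-res) stages.
        target = feats[0].shape[-2:]
        attended = []
        for f, proj, gate in zip(feats, self.projs, self.gates):
            f = gate(proj(f))  # learn how much this level contributes, channel by channel
            attended.append(F.interpolate(f, size=target, mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(attended, dim=1))


# Example with four ResNet-like stages.
feats = [torch.randn(1, c, s, s) for c, s in [(256, 88), (512, 44), (1024, 22), (2048, 11)]]
out = MultiLevelIntegrator([256, 512, 1024, 2048])(feats)  # -> (1, 64, 88, 88)
```

The gating lets the network suppress redundant low-level detail or weak high-level context instead of treating every level equally, which is the redundancy problem the abstract points out.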
Published in: Engineering Applications of Artificial Intelligence, 2020-03, Vol. 89, p. 103419, Article 103419
Main Authors: Noori, Mehrdad; Mohammadi, Sina; Majelan, Sina Ghofrani; Bahri, Ali; Havaei, Mohammad
Format: Article
Language: English
Subjects: Attention guidance; Deep convolutional neural networks; Fully convolutional neural networks; Salient object detection
DOI: 10.1016/j.engappai.2019.103419
ISSN: 0952-1976
EISSN: 1873-6769
Publisher: Elsevier Ltd
Source: ScienceDirect Freedom Collection 2022-2024
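Finally, the abstract credits part of the improvement to a Sharpening Loss that yields more confident, less blurry saliency maps than cross-entropy. The paper's exact formulation is not reproduced in this record, so the function below is only a hedged illustration of a loss with that flavour: a soft F-measure term combined with mean absolute error, with placeholder hyper-parameters.

```python
import torch


def sharpening_style_loss(pred: torch.Tensor, target: torch.Tensor,
                          beta_sq: float = 0.3, lam: float = 1.0,
                          eps: float = 1e-7) -> torch.Tensor:
    """Illustrative only: a soft (differentiable) F-measure term plus mean absolute
    error. Both terms reward confident, well-localised maps, which tends to give
    sharper predictions than per-pixel cross-entropy. The weights beta_sq and lam
    are placeholders, not values taken from the paper.

    pred and target are saliency maps in [0, 1] with shape (N, 1, H, W)."""
    dims = (1, 2, 3)
    tp = (pred * target).sum(dim=dims)                      # soft true positives
    precision = tp / (pred.sum(dim=dims) + eps)
    recall = tp / (target.sum(dim=dims) + eps)
    f_measure = (1 + beta_sq) * precision * recall / (beta_sq * precision + recall + eps)
    mae = (pred - target).abs().mean(dim=dims)
    return ((1.0 - f_measure) + lam * mae).mean()


# Example: loss between a random prediction and a binary ground-truth mask.
loss = sharpening_style_loss(torch.rand(2, 1, 64, 64), (torch.rand(2, 1, 64, 64) > 0.5).float())
```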