
RFLNet: Reverse Feature Learning Network for Salient Object Detection in Forward-Looking Sonar Images

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 155437-155450
Main Authors: He, Fu-Lin; Wang, Zhen; Yuan, Shen-Ao; Zhang, Shan-Wen; Zhao, Zheng-Yang
Format: Article
Language: English
Description
Summary: Underwater sonar image salient object detection (SOD) plays a crucial role in underwater rescue and marine resource discovery. Although existing SOD methods have shown some potential on sonar images, they perform poorly in object localization, clear boundary detection, and local refinement. To address these issues, we propose a Reverse Feature Learning Network (RFLNet), which consists of four modules: (a) a Coarse Feature Extraction Module (CFEM), (b) a Reverse Feature Localization Branch Module (RFLBM), (c) a Residual Channel Spatial Parallel Attention mechanism (RCSPA), and (d) a Feature Refinement Module (FRM). First, we generate a coarse prediction map with the CFEM and extract object boundary and localization information with the RFLBM. Second, we use feedback signals from the backpropagation algorithm to guide the training of the CFEM. Additionally, we introduce the RCSPA to suppress interference from low-level non-salient information. Finally, we use the FRM to further refine the local details of the coarse prediction feature map generated by the CFEM. To demonstrate the effectiveness of the proposed RFLNet, we conducted extensive experiments on a forward-looking sonar image dataset. Experimental results show that our method achieved an enhanced contrast measurement (E-measure) of 0.9894 and a boundary displacement error of 2.9126, outperforming 20 other state-of-the-art methods. The source code and models are available at https://github.com/darkseid-arch/RFLNet-FLSSOD.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3465534