TCGNet: Type-Correlation Guidance for Salient Object Detection

Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, 2024-07, Vol. 25 (7), pp. 6633-6644
Main Authors: Liu, Yi; Zhou, Ling; Wu, Gengshen; Xu, Shoukun; Han, Jungong
Format: Article
Language:English
Summary: Contrast and part-whole relations, induced by deep neural networks such as Convolutional Neural Networks (CNNs) and Capsule Networks (CapsNets), are known as two types of semantic cues for deep salient object detection. However, few works pay attention to their complementary properties in the context of saliency prediction. In this paper, we probe into this issue and propose a Type-Correlation Guidance Network (TCGNet) for salient object detection. Specifically, a Multi-Type Cue Correlation (MTCC) module covering CNNs and CapsNets is designed to extract the contrast and part-whole relational semantics, respectively. Using MTCC, two correlation matrices containing complementary information are computed from these two types of semantics. In turn, these correlation matrices are used to guide the learning of the above semantics so as to generate better saliency cues. Besides, a Type Interaction Attention (TIA) module is developed to enable interaction between the semantics from CNNs and CapsNets for saliency prediction. Experiments and analysis on five benchmarks show the superiority of the proposed approach. Code has been released at https://github.com/liuyi1989/TCGNet .
ISSN: 1524-9050, 1558-0016
DOI: 10.1109/TITS.2023.3342811
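
Illustrative sketch: the abstract describes computing cross-type correlation matrices between CNN (contrast) and CapsNet (part-whole) semantics and using each matrix to guide the other branch. The PyTorch snippet below is a minimal sketch of that idea under stated assumptions: the two branches produce same-shaped feature maps, and the function name, shapes, scaled-softmax normalization, and residual fusion are all illustrative choices, not the authors' released MTCC/TIA implementation (see the GitHub link above).

    # Minimal sketch of cross-type correlation guidance between two cue types.
    # Shapes, normalization, and fusion here are assumptions for illustration.
    import torch

    def type_correlation_guidance(f_cnn, f_caps):
        """Guide CNN (contrast) and CapsNet (part-whole) features with
        cross-type channel correlation matrices.

        f_cnn, f_caps: (B, C, H, W) tensors assumed to share all dimensions.
        Returns the two guided feature maps, same shape as the inputs.
        """
        b, c, h, w = f_cnn.shape
        x = f_cnn.flatten(2)   # (B, C, HW) contrast semantics
        y = f_caps.flatten(2)  # (B, C, HW) part-whole semantics

        # Channel-wise correlation between the two cue types: (B, C, C).
        scale = (h * w) ** 0.5
        corr_xy = torch.softmax(torch.bmm(x, y.transpose(1, 2)) / scale, dim=-1)
        corr_yx = torch.softmax(torch.bmm(y, x.transpose(1, 2)) / scale, dim=-1)

        # Each branch mixes in the other type's semantics, weighted by the
        # correlation matrix, with a residual connection back to its input.
        g_cnn = torch.bmm(corr_xy, y).view(b, c, h, w) + f_cnn
        g_caps = torch.bmm(corr_yx, x).view(b, c, h, w) + f_caps
        return g_cnn, g_caps

    if __name__ == "__main__":
        cnn_feat = torch.randn(2, 64, 32, 32)   # contrast cues (CNN branch)
        caps_feat = torch.randn(2, 64, 32, 32)  # part-whole cues (CapsNet branch)
        g1, g2 = type_correlation_guidance(cnn_feat, caps_feat)
        print(g1.shape, g2.shape)  # torch.Size([2, 64, 32, 32]) each

The scaled softmax keeps each row of the (C x C) correlation matrix normalized, so every channel of one branch receives a weighted sum of the other type's cues; the paper's actual MTCC and TIA designs may differ from this sketch.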