Saliency-based dual-attention network for unsupervised video object segmentation
Published in: The Journal of Supercomputing, 2024-03, Vol. 80 (4), p. 4996–5010
Main Authors: ,
Format: Article
Language: English
Summary: This paper addresses unsupervised video object segmentation (UVOS), which segments the objects of interest throughout an entire video without any annotation. Many UVOS methods have been proposed in recent years; although they perform well, they rely on heavyweight networks, which often leads to large model sizes. To reduce model size while keeping competitive performance, we propose a saliency-based dual-attention (SDA) method for UVOS. Our method takes video frames and optical flow as inputs, extracting appearance information from the frames and motion information from the optical flow. We design a two-branch network over this appearance and motion information, and the information from the two branches is fused via a saliency-based dual-attention module to segment the primary object in one path. The module is composed of saliency attention and saliency-based reverse attention. To demonstrate the effectiveness of our network, we evaluate it on the DAVIS-2016 and SegTrack v2 datasets. Experimental results show that our method achieves competitive results in terms of both accuracy and model size.
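The fusion scheme described in the abstract, a saliency attention path and a complementary reverse-attention path applied to fused appearance and motion features, can be sketched roughly as below. This is an illustrative guess at the structure only: the function name, the additive two-branch fusion, the sigmoid gating, and the tensor shapes are all assumptions, since the abstract does not give the exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def saliency_dual_attention(appearance, motion, saliency_logits):
    """Hypothetical sketch of a saliency-based dual-attention fusion.

    appearance, motion : feature maps of shape (C, H, W) from the two branches
    saliency_logits    : per-pixel saliency logits of shape (1, H, W)
    Returns the saliency-attended and reverse-attended feature maps.
    """
    s = sigmoid(saliency_logits)        # saliency map in [0, 1], broadcast over C
    fused = appearance + motion         # simple additive two-branch fusion (assumed)
    foreground = s * fused              # saliency attention path
    background = (1.0 - s) * fused      # saliency-based reverse attention path
    return foreground, background

# Toy example: constant 1-channel 4x4 feature maps, zero logits -> saliency 0.5
app = np.ones((1, 4, 4))
mot = np.ones((1, 4, 4))
sal = np.zeros((1, 4, 4))
fg, bg = saliency_dual_attention(app, mot, sal)
```

In this toy example the fused features equal 2 everywhere and the saliency map is uniformly 0.5, so the two attention paths split the fused response equally; in a trained network the saliency map would instead separate the primary object from the background.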
ISSN: 0920-8542; 1573-0484
DOI: 10.1007/s11227-023-05637-x