A Co-Saliency Object Detection Model for Video Sequences
Published in: International Journal of Performability Engineering, 2020-11, Vol. 16 (11), p. 1793
Main Authors: , , ,
Format: Article
Language: English
Summary: Whilst existing research mainly focuses on detecting the saliency of dynamic objects from spatiotemporal features, it is also meaningful to detect the saliency of static objects and label their saliency values on the video saliency map, which is useful for many high-level applications. In view of this, we propose a novel salient object detection model for video sequences that combines dynamic saliency and static saliency into a co-saliency map. First, the saliency of general objects in each frame is estimated by a motion-independent algorithm, and a global static saliency map is generated from the results. Next, dynamic regions are detected by an improved motion-based approach, and the dynamic saliency map is computed with a local saliency detection method using the detected dynamic regions and the visual fixation map. Finally, a novel co-saliency algorithm is devised to fuse the static and dynamic maps. The resulting hierarchical co-saliency map reflects the saliency of both dynamic and static objects and satisfies the demands of more advanced tasks. Evaluation on two existing datasets shows that the proposed model achieves state-of-the-art performance.
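To make the final fusion step described in the summary more concrete, the following is a minimal sketch of combining a per-frame static saliency map with a dynamic (motion-based) saliency map into a single co-saliency map. The abstract does not specify the paper's actual fusion rule, so the weighting scheme, the agreement-boost term, and the function names here are illustrative assumptions rather than the authors' algorithm.

```python
# Sketch: fusing static and dynamic saliency maps into a co-saliency map.
# The fusion rule below (linear blend plus an agreement boost) is an assumption;
# the paper's own co-saliency algorithm is not detailed in the abstract.

import numpy as np


def normalize(saliency_map: np.ndarray) -> np.ndarray:
    """Rescale a saliency map to the [0, 1] range."""
    s_min, s_max = saliency_map.min(), saliency_map.max()
    if s_max - s_min < 1e-12:
        return np.zeros_like(saliency_map, dtype=np.float64)
    return (saliency_map - s_min) / (s_max - s_min)


def fuse_co_saliency(static_map: np.ndarray,
                     dynamic_map: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """Fuse static and dynamic saliency maps of one frame.

    alpha balances the two cues; regions where both cues agree are further
    boosted by a multiplicative term.
    """
    s = normalize(static_map.astype(np.float64))
    d = normalize(dynamic_map.astype(np.float64))
    fused = alpha * s + (1.0 - alpha) * d + s * d  # blend + agreement boost
    return normalize(fused)


if __name__ == "__main__":
    # Toy 4x4 frame: a salient static object in the top-left corner and a
    # moving object in the bottom-right corner; both appear in the fused map.
    static_map = np.zeros((4, 4)); static_map[0:2, 0:2] = 0.8
    dynamic_map = np.zeros((4, 4)); dynamic_map[2:4, 2:4] = 1.0
    print(fuse_co_saliency(static_map, dynamic_map))
```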
ISSN: 0973-1318
DOI: 10.23940/ijpe.20.11.p11.17931802