A Co-Saliency Object Detection Model for Video Sequences

Bibliographic Details
Published in:International journal of performability engineering 2020-11, Vol.16 (11), p.1793
Main Authors: Tao, Wei, Xuezhuan, Zhao, Lishen, Pei, Lingling, Li
Format: Article
Language:English
Abstract: Whilst existing research mainly focuses on detecting the saliency of dynamic objects based on spatiotemporal features, it is also meaningful to detect the saliency of static objects and label their salient values on the video saliency map, a useful tool for many high-level applications. In view of this, we propose a novel salient object detection model for video sequences, which combines dynamic saliency and static saliency into a co-saliency map. First, the salient degree of the general objects in each frame is estimated by a motion-independent algorithm, and the global static saliency map is generated from the results. Next, dynamic regions are detected by an improved motion-based approach, and the dynamic saliency map is computed with a local saliency detection method according to the related dynamic regions and the visual fixation map. Finally, a novel co-saliency algorithm is devised to fuse the static and dynamic maps. The final hierarchical co-saliency map reflects the saliency of both dynamic and static objects, and it satisfies the demands of more advanced tasks. Evaluation on two existing datasets shows that the proposed model achieves state-of-the-art performance.
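The abstract describes a three-stage pipeline: a per-frame static saliency map, a motion-derived dynamic saliency map, and a fusion step producing the co-saliency map. The record does not reproduce the paper's actual fusion rule, so the sketch below is only an illustrative stand-in: a convex combination of the two normalized maps, boosted where both agree. The function names, the weighting parameter `alpha`, and the agreement boost are all assumptions, not the authors' method.

```python
import numpy as np

def normalize(m):
    # Scale a saliency map to [0, 1]; guard against a flat map.
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def co_saliency(static_map, dynamic_map, alpha=0.5):
    """Fuse per-frame static and dynamic saliency maps.

    Hypothetical fusion rule: a convex combination of the two
    normalized maps, emphasized where both maps agree. The paper's
    actual co-saliency algorithm is not given in this record.
    """
    s = normalize(static_map)
    d = normalize(dynamic_map)
    fused = alpha * s + (1.0 - alpha) * d
    # Boost regions salient in both maps (element-wise agreement).
    fused = normalize(fused * (1.0 + np.minimum(s, d)))
    return fused
```

Normalizing both inputs first keeps the combination meaningful when the static and dynamic detectors produce scores on different scales; the output is again a map in [0, 1].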
DOI: 10.23940/ijpe.20.11.p11.17931802
Publisher: RAMS Consultants (Jaipur)
Rights: Copyright RAMS Consultants, November 2020
ISSN: 0973-1318
Subjects: Algorithms; Dynamic programming; Object recognition; Salience; Sensors; Static objects