Hierarchical Domain-Adapted Feature Learning for Video Saliency Prediction

Published in: International Journal of Computer Vision, 2021-12, Vol. 129 (12), pp. 3216-3232
Main Authors: Bellitto, G.; Proietto Salanitri, F.; Palazzo, S.; Rundo, F.; Giordano, D.; Spampinato, C.
Format: Article
Language: English
Publisher: Springer US (New York)
DOI: 10.1007/s11263-021-01519-y
ISSN: 0920-5691
EISSN: 1573-1405
Rights: © The Author(s) 2021 (open access, CC BY 4.0)
Subjects: Adaptation; Annotations; Artificial Intelligence; Computer Imaging; Computer Science; Conspicuity; Datasets; Feature extraction; Image Processing and Computer Vision; Learning; Model accuracy; Pattern Recognition; Pattern Recognition and Graphics; Salience; Source code; Vision
Online Access: https://doi.org/10.1007/s11263-021-01519-y

Description: In this work, we propose a 3D fully convolutional architecture for video saliency prediction that employs hierarchical supervision on intermediate maps (referred to as conspicuity maps) generated from features extracted at different abstraction levels. We equip the base hierarchical learning mechanism with two techniques: domain adaptation and domain-specific learning. For the former, we encourage the model to learn hierarchical, general-purpose features in an unsupervised way, using gradient reversal at multiple scales, to enhance generalization on datasets for which no annotations are available during training. For domain specialization, we employ domain-specific operations (namely, priors, smoothing and batch normalization) that specialize the learned features to individual datasets in order to maximize performance. Our experiments show that the proposed model yields state-of-the-art accuracy on supervised saliency prediction. When the base hierarchical model is extended with the domain-specific modules, performance improves further: it outperforms state-of-the-art models on three of the five metrics on the DHF1K benchmark and reaches second-best results on the other two. When we instead test it in an unsupervised domain adaptation setting, enabling the hierarchical gradient reversal layers, it achieves performance comparable to the supervised state-of-the-art. Source code, trained models and example outputs are publicly available at https://github.com/perceivelab/hd2s.
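
To make the hierarchical supervision concrete, the following is a minimal PyTorch sketch of deep supervision on conspicuity maps: each intermediate map is upsampled to ground-truth resolution and penalized alongside the final fused prediction. The KL-divergence loss and the equal per-level weighting are illustrative assumptions, not the paper's exact objective, and `hierarchical_loss` and its argument names are hypothetical.

```python
# Hedged sketch: deep supervision over intermediate "conspicuity" maps.
# Assumptions (not from the paper): KL-divergence loss, equal level weights,
# 2D per-frame maps of shape (B, 1, H, W).
import torch
import torch.nn.functional as F

def kl_saliency(pred_logits: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """KL divergence between a predicted map (given as logits) and a
    ground-truth fixation density that sums to 1 over each map."""
    pred = F.log_softmax(pred_logits.flatten(1), dim=1)
    return F.kl_div(pred, gt.flatten(1), reduction="batchmean")

def hierarchical_loss(conspicuity_maps, final_map, target):
    """Supervise every intermediate map as well as the final prediction."""
    loss = kl_saliency(final_map, target)
    for m in conspicuity_maps:  # one map per decoder depth, various sizes
        m_up = F.interpolate(m, size=target.shape[-2:],
                             mode="bilinear", align_corners=False)
        loss = loss + kl_saliency(m_up, target)
    return loss
```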
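
The unsupervised domain adaptation rests on gradient reversal. Below is a generic PyTorch re-implementation of the standard gradient reversal layer (Ganin and Lempitsky, 2015), which the "hierarchical gradient reversal layers" mentioned above would apply at multiple feature scales; this is not the authors' code, and `lambda_` (the reversal strength) is a placeholder name.

```python
# Hedged sketch of a gradient reversal layer (GRL): identity in the forward
# pass, negated and scaled gradient in the backward pass.
import torch
from torch.autograd import Function

class GradientReversal(Function):
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)  # identity; view_as keeps autograd tracking

    @staticmethod
    def backward(ctx, grad_output):
        # Negating the gradient makes the feature extractor *maximize* the
        # domain classifier's loss, pushing features toward domain invariance.
        return -ctx.lambda_ * grad_output, None  # no gradient for lambda_

def grad_reverse(x: torch.Tensor, lambda_: float = 1.0) -> torch.Tensor:
    return GradientReversal.apply(x, lambda_)
```

In the hierarchical setting described here, one would attach a small domain classifier behind `grad_reverse` at each feature scale and train it to tell source videos from target videos, so that the reversed gradients encourage every level of the network to learn domain-invariant features.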
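
Of the domain-specific operations named in the abstract (priors, smoothing, batch normalization), domain-specific batch normalization is the simplest to illustrate: each target dataset keeps its own normalization statistics and affine parameters while the convolutional weights remain shared. The module below is a hedged sketch; `num_domains` and the integer `domain` argument are assumed conventions, not the paper's interface.

```python
# Hedged sketch of domain-specific batch normalization for a 3D network:
# one BatchNorm3d per dataset, selected by an integer domain index.
import torch
import torch.nn as nn

class DomainSpecificBN3d(nn.Module):
    def __init__(self, num_features: int, num_domains: int):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm3d(num_features) for _ in range(num_domains)
        )

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # x: (B, C, T, H, W) features; each domain normalizes with its own
        # running statistics and affine parameters.
        return self.bns[domain](x)

# Example: normalize 64-channel features for the second of three datasets.
bn = DomainSpecificBN3d(num_features=64, num_domains=3)
feats = bn(torch.randn(2, 64, 16, 56, 56), domain=1)
```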