
FUNNRAR: Hybrid rarity/learning visual saliency

Bibliographic Details
Main Authors: Marighetto, P., Abdelkader, I. Hadj, Duzelier, S., Decombas, M., Riche, N., Jakubowicz, J., Mancas, M., Gosselin, B., Laganiere, R.
Format: Conference Proceeding
Language: English
Description
Summary: Saliency models provide heatmaps that highlight the probability of each pixel attracting human gaze. To identify an image's important regions, feature maps are first extracted. Rarity, surprise, or contrast is then computed, yielding conspicuity maps that show the important regions of each feature map. The final saliency map is obtained by merging these maps; the fusion is usually a linear combination in which the coefficients reflect the importance of each map. We propose a novel, generic fusion mechanism based on 1) a rarity-based attention module and 2) a neural network that performs the fusion. The first layer of the network merges the weighted feature maps into a saliency map, and the second layer takes spatial information into account. The approach is compared to 8 models using 4 different comparison metrics on open state-of-the-art databases.
ISSN: 2381-8549
DOI: 10.1109/ICIP.2016.7532866
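
The summary above describes the fusion as a two-layer neural network: the first layer merges weighted conspicuity/feature maps into a single saliency map, and the second layer incorporates spatial information. Below is a minimal, illustrative sketch of that idea, not the trained FUNNRAR network; the fixed coefficients, the Gaussian center bias standing in for the spatial layer, the sigmoid nonlinearity, and the function name fuse_conspicuity_maps are all assumptions made for illustration.

import numpy as np

def fuse_conspicuity_maps(maps, feature_weights, spatial_weights):
    """Fuse conspicuity maps into a single saliency heatmap.

    maps: (n_features, H, W) conspicuity maps, one per feature.
    feature_weights: (n_features,) importance of each map ("layer 1").
    spatial_weights: (H, W) per-pixel spatial weighting ("layer 2").
    """
    # Layer 1: weighted linear combination of the conspicuity maps.
    combined = np.tensordot(feature_weights, maps, axes=1)   # shape (H, W)
    # Layer 2: modulate by spatial information (here a fixed center bias).
    spatial = combined * spatial_weights
    # Squash and rescale to [0, 1] so the result reads as a heatmap.
    saliency = 1.0 / (1.0 + np.exp(-spatial))
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

# Usage with random stand-in conspicuity maps.
h, w, n = 64, 64, 3
maps = np.random.rand(n, h, w)
w_feat = np.array([0.5, 0.3, 0.2])   # illustrative importance coefficients
yy, xx = np.mgrid[0:h, 0:w]
center_bias = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * (h / 4) ** 2))
saliency_map = fuse_conspicuity_maps(maps, w_feat, center_bias)

In the paper, the combination coefficients and the spatial layer are learned by the network rather than fixed as above, and the feature maps are weighted by the rarity-based attention module before fusion.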