Forward-backward visual saliency propagation in Deep NNs vs internal attentional mechanisms
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Attention models in deep learning algorithms have gained popularity in recent years. In this work, we propose an attention mechanism based on visual saliency maps injected into a Deep Neural Network (DNN) to enhance regions in feature maps during forward-backward propagation in training, and during forward propagation only in testing. The key idea is to spatially capture features associated with prominent regions in images and propagate them to deeper layers. During training, we first take the well-known AlexNet architecture as backbone, and then the ResNet architecture, to solve the task of identifying buildings of Mexican architecture. Our model, equipped with the "external" visual saliency-based attention mechanism, outperforms models armed with squeeze-and-excitation units and double-attention blocks.
ISSN: 2154-512X
DOI: 10.1109/IPTA.2019.8936125
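
The mechanism described in the summary lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of the idea: an external saliency map is resized to a feature map's spatial resolution and used as a multiplicative gate, so salient regions are amplified in the forward pass and, by the chain rule, receive proportionally scaled gradients in the backward pass. The module name `SaliencyGate`, the residual term, and the bilinear resizing are illustrative assumptions, not the paper's exact injection scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyGate(nn.Module):
    """Spatially re-weights a feature map with an external saliency map.

    Hypothetical sketch: the single-channel saliency map (at image
    resolution) is resized to the feature map's spatial size and used
    as a multiplicative gate over all channels.
    """

    def __init__(self, residual: bool = True):
        super().__init__()
        # A residual term (x + s * x) avoids zeroing out features in
        # regions the saliency detector missed; a design assumption here.
        self.residual = residual

    def forward(self, features: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        # features: (N, C, H, W); saliency: (N, 1, H_img, W_img), values in [0, 1]
        s = F.interpolate(saliency, size=features.shape[-2:],
                          mode="bilinear", align_corners=False)
        gated = features * s  # broadcasts the single saliency channel over C
        return features + gated if self.residual else gated

# Example usage with a precomputed saliency map (shapes are illustrative,
# matching AlexNet's conv5 output for 224x224 inputs):
# feats = backbone_conv(images)               # (N, 256, 13, 13)
# out = SaliencyGate()(feats, saliency_maps)  # saliency_maps: (N, 1, 224, 224)
```

In this reading, no extra gating is needed at test time beyond the same forward multiplication, which is consistent with the summary's note that the mechanism acts during forward-backward propagation in training but forward propagation only in testing.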