A Simple and Light-Weight Attention Module for Convolutional Neural Networks
Published in: International Journal of Computer Vision, April 2020, Vol. 128 (4), pp. 783-798
Main Authors: Park, Jongchan; Woo, Sanghyun; Lee, Joon-Young; Kweon, In So
Format: Article
Language: English
Summary: Many aspects of deep neural networks, such as depth, width, or cardinality, have been studied to strengthen the representational power. In this work, we study the effect of attention in convolutional neural networks and present our idea in a simple self-contained module, called Bottleneck Attention Module (BAM). Given an intermediate feature map, BAM efficiently produces the attention map along two factorized axes, channel and spatial, with negligible overhead. BAM is placed at bottlenecks of various models where the downsampling of feature maps occurs, and is jointly trained in an end-to-end manner. Ablation studies and extensive experiments are conducted on CIFAR-100/ImageNet classification, VOC2007/MS-COCO detection, super-resolution, and scene parsing with various architectures, including mobile-oriented networks. BAM shows consistent improvements across all experiments, demonstrating its wide applicability. The code and models are available at https://github.com/Jongchan/attentionmodule.
ISSN: 0920-5691; 1573-1405
DOI: 10.1007/s11263-019-01283-0
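
The summary describes BAM as computing an attention map along two factorized axes, channel and spatial, and applying it at network bottlenecks. The snippet below is a minimal PyTorch sketch of such a module, not the authors' code: the class name `BAM`, the reduction ratio, the dilation value, and the exact branch layouts (global average pooling plus a small MLP for the channel branch; 1x1 and dilated 3x3 convolutions for the spatial branch, combined through a sigmoid gate and applied residually) are illustrative assumptions based on the abstract; the reference implementation is in the linked repository.

```python
import torch
import torch.nn as nn


class BAM(nn.Module):
    """Sketch of a Bottleneck Attention Module: parallel channel and spatial
    attention branches, summed, gated with a sigmoid, and applied residually."""

    def __init__(self, channels, reduction=16, dilation=4):
        super().__init__()
        mid = channels // reduction
        # Channel branch: global average pool followed by a small MLP.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, mid),
            nn.BatchNorm1d(mid),
            nn.ReLU(inplace=True),
            nn.Linear(mid, channels),
        )
        # Spatial branch: 1x1 channel reduction, dilated 3x3 convolutions for a
        # larger receptive field, then projection to a single attention map.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, 1, kernel_size=1),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        ch_att = self.channel(x).view(b, c, 1, 1)   # (B, C, 1, 1)
        sp_att = self.spatial(x)                     # (B, 1, H, W)
        att = torch.sigmoid(ch_att + sp_att)         # broadcast to (B, C, H, W)
        return x + x * att                           # residual attention gating


# Usage example: insert the module after a stage where downsampling occurs.
if __name__ == "__main__":
    bam = BAM(channels=64)
    feat = torch.randn(2, 64, 32, 32)
    out = bam(feat)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```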