
Spatially-Aware Context Neural Networks

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2021, Vol. 30, p. 6906-6916
Main Authors: Ruan, Dongsheng; Shi, Yu; Wen, Jun; Zheng, Nenggan; Zheng, Min
Format: Article
Language: English
Summary: A variety of computer vision tasks benefit significantly from increasingly powerful deep convolutional neural networks. However, the inherently local nature of the convolution operation prevents most existing models from capturing the long-range feature interactions needed for improved performance. In this paper, we propose a novel module, called the Spatially-Aware Context (SAC) block, which learns spatially-aware contexts by capturing multi-mode global contextual semantics for sophisticated long-range dependency modeling. We enable customized non-local feature interactions for each spatial position through re-weighted global context fusion in a non-normalized way. SAC is very lightweight and can be easily plugged into popular backbone models. Extensive experiments on the COCO, ImageNet, and HICO-DET benchmarks show that our SAC block achieves significant performance improvements over existing baseline architectures with a negligible increase in computational cost. The results also demonstrate the effectiveness and scalability of the proposed approach in capturing long-range dependencies for object detection, segmentation, and image classification, outperforming a range of state-of-the-art attention blocks.
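The summary describes the mechanism only at a high level. As a rough illustration of what "re-weighted global context fusion in a non-normalized way" could look like for a flattened feature map, here is a minimal NumPy sketch; the function name `sac_block_sketch`, the projections `w_ctx` and `w_gate`, and the sigmoid-gated residual fusion are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sac_block_sketch(x, w_ctx, w_gate):
    """Hypothetical sketch of a global-context block with per-position fusion.

    x      : (H*W, C) feature map, spatial positions flattened
    w_ctx  : (C, 1) projection producing a scalar context score per position
    w_gate : (C, C) projection producing per-position gating logits
    """
    # Aggregate all positions into one global context vector,
    # weighted by learned per-position scores (no softmax normalization)
    scores = x @ w_ctx                              # (H*W, 1)
    context = (scores * x).sum(axis=0)              # (C,)
    # Customized fusion: each position gates the shared global context
    gates = 1.0 / (1.0 + np.exp(-(x @ w_gate)))     # sigmoid, (H*W, C)
    return x + gates * context                      # broadcast + residual add
```

Because the gates are computed independently per spatial position, each position receives its own re-weighted copy of the global context, which is the "customized non-local interaction" idea the abstract points to.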
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2021.3097917