Cross-Level Attentive Feature Aggregation for Change Detection
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2024-07, Vol. 34 (7), pp. 6051-6062
Main Authors:
Format: Article
Language: English
Subjects:
Summary: This article studies change detection within pairs of optical images remotely sensed from overhead views. We consider that a high-performance solution to this task entails highly effective multi-level feature interaction. With that in mind, we propose a novel approach characterized by two attentive feature aggregation schemes that handle cross-level features in different processes. For the Siamese-based feature extraction of the bi-temporal image pair, we place emphasis on constructing semantically strong and contextually rich pyramidal feature representations to enable comprehensive matching and differencing. To this end, we leverage a feature pyramid network and reformulate its cross-level feature merging procedure as top-down modulation with multiplicative channel attention and additive gated attention. For the multi-level difference feature fusion, we progressively fuse the derived difference feature pyramid in an attend-then-filter manner. This makes the high-level fused features and the adjacent lower-level difference features constrain each other, and thus allows steady feature fusion for specifying change regions. In addition, we build an upsampling head as a replacement for conventional heads followed by static upsampling. Our implementation contains a stack of upsampling modules that allocate features to each pixel; each module has a learnable branch that produces attentive residuals for refining the statically upsampled results. We conduct extensive experiments on four public datasets, and the results show that our approach achieves state-of-the-art performance. Code is available at https://github.com/xingronaldo/CLAFA. (An illustrative sketch of the cross-level merging and upsampling ideas appears after this record.)
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2023.3344092
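
The summary describes cross-level merging as top-down modulation with multiplicative channel attention and additive gated attention, and an upsampling head whose stacked modules add learned attentive residuals on top of static upsampling. The following is a minimal, hypothetical PyTorch sketch of those two ideas under those stated assumptions; the module layouts and every name in it (AttentiveTopDownMerge, AttentiveUpsample, channel_att, gate) are illustrative guesses, not the paper's actual architecture, which is available in the repository linked above.

```python
# Hypothetical sketches, for orientation only; not the authors' implementation
# (see https://github.com/xingronaldo/CLAFA for the real code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveTopDownMerge(nn.Module):
    """Merge a coarser pyramid level into the adjacent finer level using
    multiplicative channel attention plus an additive, gated injection."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Multiplicative channel attention (squeeze-and-excitation style),
        # computed from the upsampled high-level feature.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Additive gated attention: a spatial gate built from both levels
        # controls how much high-level context is added back.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        high_up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                                align_corners=False)
        low_mod = low * self.channel_att(high_up)           # multiplicative
        g = self.gate(torch.cat([low_mod, high_up], dim=1))
        return self.smooth(low_mod + g * high_up)           # additive, gated


class AttentiveUpsample(nn.Module):
    """Statically upsample a feature map, then refine it with a learned,
    attention-weighted residual (one stage of an upsampling head)."""

    def __init__(self, channels: int):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.att = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                 nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        up = F.interpolate(x, scale_factor=2, mode="bilinear",
                           align_corners=False)
        return up + self.att(up) * self.residual(up)


if __name__ == "__main__":
    # Toy shapes: a finer pyramid level at 64x64 and a coarser one at 32x32.
    low = torch.randn(2, 128, 64, 64)
    high = torch.randn(2, 128, 32, 32)
    merged = AttentiveTopDownMerge(channels=128)(low, high)
    refined = AttentiveUpsample(channels=128)(merged)
    print(merged.shape, refined.shape)  # (2,128,64,64) and (2,128,128,128)
```

In this sketch the channel attention re-weights the finer level multiplicatively while a spatial gate controls the additive injection of coarse context, mirroring the abstract's "multiplicative channel attention and additive gated attention" phrasing; the paper's actual attention forms, placement, and difference-feature fusion may differ.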