Learning to Reduce Information Bottleneck for Object Detection in Aerial Images

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, 2023-01, Vol. 20, p. 1-1
Main Authors: Shen, Yuchen, Zhang, Dong, Song, Zhihao, Jiang, Xuesong, Ye, Qiaolin
Format: Article
Language:English
Description
Summary: Object detection in aerial images is a critical task in geoscience and remote sensing. Despite the popularity of computer vision methods for detecting objects, these methods face significant challenges in aerial images, such as appearance occlusion and variable object sizes. In this letter, we explore the limitations of conventional neck networks in object detection by analyzing their information bottlenecks, and we propose an enhanced neck network to address the information deficiency in current neck networks. Our proposed neck network, which serves as a bridge between the backbone network and the head network, comprises a global semantic network (GSNet) and a feature fusion refinement module (FRM). The GSNet is designed to perceive contextual surroundings and propagate discriminative knowledge through a bidirectional global pattern. The FRM is developed to exploit different levels of features to capture comprehensive location information. We validate the efficacy and efficiency of our approach through experiments on two challenging datasets, DOTA and HRSC2016. Our method outperforms existing approaches in both accuracy and complexity.
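The abstract describes a neck network that combines global semantic context with multi-level feature fusion. The NumPy sketch below illustrates that general idea only: the `global_context` channel gating, the nearest-neighbour upsampling, and the top-down fusion in `fuse_levels` are hypothetical stand-ins chosen for illustration, since the record does not specify the actual GSNet or FRM architectures.

```python
import numpy as np

def global_context(feat):
    # Hypothetical stand-in for a global semantic block: reweight each
    # channel by a sigmoid gate computed from globally pooled context.
    ctx = feat.mean(axis=(1, 2), keepdims=True)   # (C, 1, 1) global pooling
    gate = 1.0 / (1.0 + np.exp(-ctx))             # sigmoid in (0, 1)
    return feat * gate

def upsample2x(feat):
    # Nearest-neighbour 2x upsampling so a coarser level aligns spatially
    # with the next finer one.
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def fuse_levels(pyramid):
    # Hypothetical stand-in for a fusion/refinement module: top-down,
    # coarse-to-fine summation of aligned feature maps (FPN-style).
    fused = [pyramid[-1]]
    for feat in reversed(pyramid[:-1]):
        fused.append(feat + upsample2x(fused[-1]))
    return list(reversed(fused))                  # back to finest-first order

# Toy three-level pyramid: 8 channels, spatial size halved at each level.
rng = np.random.default_rng(0)
pyramid = [rng.standard_normal((8, 32 >> i, 32 >> i)) for i in range(3)]
neck_out = [global_context(f) for f in fuse_levels(pyramid)]
print([f.shape for f in neck_out])  # → [(8, 32, 32), (8, 16, 16), (8, 8, 8)]
```

Note that the fusion preserves each level's spatial resolution, so the neck outputs can feed a multi-scale detection head directly.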
ISSN: 1545-598X, 1558-0571
DOI: 10.1109/LGRS.2023.3264455