YOLOAL: Focusing on the Object Location for Detection on Drone Imagery

Bibliographic Details
Published in: IEEE Access, 2023, Vol. 11, pp. 128886-128897
Main Authors: Chen, Xinting, Yang, Wenzhu, Zeng, Shuang, Geng, Lei, Jiao, Yanyan
Format: Article
Language:English
Description
Summary: Object detection in drone-captured scenarios, which can be regarded as a task of detecting dense small objects, remains a challenge. Drones fly at different altitudes, causing significant variation in the size of the detected objects and posing a challenge to the model. In addition, the ability of the object detection model to rapidly detect small, dense objects needs to be improved. To address these issues, we propose YOLOAL, a model that emphasizes the location information of objects. It incorporates a new attention mechanism, the Convolution and Coordinate Attention Module (CCAM), into its design. This mechanism outperforms traditional attention mechanisms in dense small-object scenes because it incorporates coordinate information that helps identify attention regions in such scenarios. Furthermore, our model uses a new loss function that combines the Efficient IoU (EIoU) and Alpha-IoU methods and achieves better results than traditional approaches. The proposed model achieves state-of-the-art performance on the VisDrone and DOTA datasets. YOLOAL reaches an AP50 (average precision at an Intersection over Union threshold of 0.5) of 63.6% and an mAP (average precision averaged over 10 IoU thresholds from 0.5 to 0.95) of 40.8% at a speed of 0.27 seconds per image on the VisDrone dataset, and an mAP of 39% on the DOTA dataset, measured on an NVIDIA A4000.
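This record does not describe CCAM's internals. As context, the sketch below shows the standard coordinate attention mechanism (Hou et al., 2021), which the abstract indicates CCAM builds on: pooling along each spatial axis separately preserves positional information, which is the coordinate cue the summary credits for locating attention regions in dense small-object scenes. The class name, reduction ratio, and layer choices here are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal sketch of coordinate attention (Hou et al., 2021).

    CCAM's added convolution branch is not described in this record,
    so only the coordinate-attention core is shown here.
    """

    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Pool along each spatial axis separately so positional
        # information along the other axis is preserved.
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = torch.cat([x_h, x_w], dim=2)                        # (n, c, h+w, 1)
        y = self.act(self.bn(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        # Per-axis attention maps, applied multiplicatively.
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (n, c, 1, w)
        return x * a_h * a_w
```

A module like this can be dropped after any convolutional block, e.g. `CoordinateAttention(256)` on a 256-channel feature map, which is why it suits insertion into a YOLO-style backbone or neck.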
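Likewise, the exact loss is not given in this record. A plausible reading of "EIoU combined with Alpha-IoU" is the EIoU penalty terms with each IoU-based term raised to a power alpha, as in the Alpha-IoU paper; the sketch below is a hedged implementation under that assumption (alpha=3 is the Alpha-IoU default, not a value confirmed by this abstract).

```python
import torch

def alpha_eiou_loss(pred, target, alpha=3.0, eps=1e-7):
    """Hypothetical sketch of an EIoU loss with the Alpha-IoU power
    generalization; the paper's exact formulation may differ.

    pred, target: (N, 4) boxes as (x1, y1, x2, y2).
    """
    # Intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)

    # Union area and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps
    iou = inter / union

    # Smallest enclosing box, used by the EIoU penalty terms
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Center-distance penalty
    px = (pred[:, 0] + pred[:, 2]) / 2
    py = (pred[:, 1] + pred[:, 3]) / 2
    tx = (target[:, 0] + target[:, 2]) / 2
    ty = (target[:, 1] + target[:, 3]) / 2
    rho2 = (px - tx) ** 2 + (py - ty) ** 2

    # Width/height penalties (EIoU splits the aspect-ratio term)
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    pen_w = (w_p - w_t) ** 2 / (cw ** 2 + eps)
    pen_h = (h_p - h_t) ** 2 / (ch ** 2 + eps)

    # Alpha-IoU raises each IoU-based term to the power alpha,
    # which up-weights hard (low-IoU) boxes during training.
    loss = 1 - iou ** alpha + (rho2 / c2) ** alpha + pen_w ** alpha + pen_h ** alpha
    return loss.mean()
```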
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3332815