ALODAD: An Anchor-Free Lightweight Object Detector for Autonomous Driving
Published in: IEEE Access, 2022, Vol. 10, pp. 40701-40714
Main Authors: , , ,
Format: Article
Language: English
Summary: Vision-based object detection is an essential component of autonomous driving. Because vehicles typically have limited on-board computing resources, a small detection model is required; at the same time, high detection accuracy and real-time inference speed are needed to ensure safety while driving. In this paper, an anchor-free lightweight object detector for autonomous driving called ALODAD is proposed. ALODAD incorporates an attention scheme into the lightweight neural network GhostNet and builds an anchor-free detection framework, lowering computational cost and parameter count while retaining high detection accuracy. Specifically, the lightweight backbone integrates a convolutional block attention module (CBAM) that emphasizes informative features in traffic-scene images for accurate bounding-box regression, and feature pyramids are constructed on top of it for multi-scale object detection. The proposed method also adds an intersection over union (IoU) branch to the decoupled detection head to rank the large number of candidate detections accurately. Data augmentation is applied during training to increase data diversity. Extensive experiments on benchmarks demonstrate that the proposed method outperforms the baseline, improving detection accuracy while meeting the real-time requirements of autonomous driving. Compared against the YOLOv5 and RetinaNet models, it achieves 98.7% and 94.5% for the average precision metrics AP50 and AP75, respectively, on the BCTSDB dataset.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3166923
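The abstract names two concrete components: a CBAM attention block inside the GhostNet backbone and a decoupled anchor-free head with an added IoU branch for ranking detections. The sketch below illustrates both ideas in PyTorch. It is a minimal illustration based only on the abstract, not the authors' code; all layer widths, the class count, and the names `CBAM` and `DecoupledIoUHead` are assumptions.

```python
# Hypothetical sketch of two components described in the ALODAD abstract:
# (1) a CBAM-style attention block and (2) a decoupled anchor-free head
# with an extra IoU branch. Sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Convolutional block attention module: channel then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: 7x7 conv over channel-wise avg/max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)  # reweight channels
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))  # reweight spatial positions


class DecoupledIoUHead(nn.Module):
    """Anchor-free head with separate classification, box, and IoU branches."""

    def __init__(self, in_channels: int = 128, num_classes: int = 3):
        super().__init__()

        def stem() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )

        self.cls_stem, self.reg_stem = stem(), stem()
        self.cls_out = nn.Conv2d(in_channels, num_classes, 1)  # per-pixel class scores
        self.box_out = nn.Conv2d(in_channels, 4, 1)  # distances to box sides (l, t, r, b)
        self.iou_out = nn.Conv2d(in_channels, 1, 1)  # predicted IoU, used to rank detections

    def forward(self, feat: torch.Tensor):
        cls_feat, reg_feat = self.cls_stem(feat), self.reg_stem(feat)
        return self.cls_out(cls_feat), self.box_out(reg_feat), self.iou_out(reg_feat)


if __name__ == "__main__":
    feat = torch.randn(1, 128, 40, 40)  # one feature-pyramid level, batch of 1
    cls, box, iou = DecoupledIoUHead()(CBAM(128)(feat))
    print(cls.shape, box.shape, iou.shape)  # (1,3,40,40) (1,4,40,40) (1,1,40,40)
```

At inference, a common design for such an IoU branch multiplies the classification score by the predicted IoU before non-maximum suppression, so that well-localized boxes are ranked first; the abstract's description of ranking candidate detections is consistent with this use, though the authors' exact scoring rule is not stated.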