
Lightweight prohibited item detection method based on YOLOV4 for x-ray security inspection

Bibliographic Details
Published in: Applied Optics, 2022-10, Vol. 61 (28), p. 8454
Main Authors: Liu, Dongming; Liu, Jianchang; Yuan, Peixin; Yu, Feng
Format: Article
Language: English
Summary: In the area of public safety and crime prevention, research based on deep learning has achieved success in detecting prohibited items in x-ray security inspection. However, most deep-learning-based object detection methods have a huge number of parameters and high computational cost, which imposes steep hardware requirements and limits their application. In this paper, a lightweight prohibited item detection method based on YOLOV4 is proposed for x-ray security inspection. First, MobileNetV3 replaces the backbone network of YOLOV4, and depthwise separable convolution is used to optimize the neck and head of YOLOV4, reducing the number of parameters and the computational cost. Second, an adaptive spatial-and-channel attention block is designed to optimize the neck of YOLOV4, improving the feature extraction capability of our method while maintaining detection accuracy. Third, focal loss is utilized to mitigate the class imbalance problem during training. Finally, the method is evaluated against YOLOV4 and YOLOV4-tiny on our real x-ray pseudocolor image dataset. Overall, the mean average precision of our method is 4.98% higher than that of YOLOV4-tiny and 0.07% lower than that of YOLOV4, while its parameter count and computational cost are slightly higher than those of YOLOV4-tiny and much lower than those of YOLOV4.
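Two of the techniques named in the abstract can be sketched in plain Python: the parameter savings of depthwise separable convolution over a standard convolution (a standard K×K layer needs K·K·C_in·C_out weights, the separable version K·K·C_in + C_in·C_out), and the focal loss, which down-weights easy examples so training focuses on hard, rare-class ones. This is a minimal illustration under assumed defaults (alpha = 0.25, gamma = 2.0, the common values from the original focal loss paper), not the authors' implementation, and the layer sizes below are hypothetical.

```python
import math

def conv_params(k, c_in, c_out):
    # Standard KxK convolution: every output channel mixes all input channels.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise KxK conv (one filter per input channel) followed by a
    # 1x1 pointwise conv that mixes channels.
    return k * k * c_in + c_in * c_out

def focal_loss(p_t, alpha=0.25, gamma=2.0):
    # Focal loss for one prediction, where p_t is the predicted probability
    # of the true class. The (1 - p_t)^gamma factor shrinks the loss of
    # well-classified examples, countering class imbalance.
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# Hypothetical 3x3 layer with 256 input and 256 output channels.
standard = conv_params(3, 256, 256)
separable = depthwise_separable_params(3, 256, 256)
print(f"standard: {standard}, separable: {separable}, "
      f"ratio: {separable / standard:.3f}")
# → standard: 589824, separable: 67840, ratio: 0.115

# An easy example (p_t = 0.9) contributes far less loss than a hard one.
print(f"easy: {focal_loss(0.9):.4f}, hard: {focal_loss(0.1):.4f}")
```

The roughly 9x reduction in weights for this layer is where methods like the one described recover most of their savings; applying it across the neck and head compounds the effect.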
ISSN: 1559-128X, 2155-3165
DOI: 10.1364/AO.467717