
DRMNet: A Multi-Task Detection Model Based on Image Processing for Autonomous Driving Scenarios

Bibliographic Details
Published in: IEEE Transactions on Vehicular Technology, 2023-12, Vol. 72 (12), p. 1-16
Main Authors: Zhao, Jiandong; Wu, Di; Yu, Zhixin; Gao, Ziyou
Format: Article
Language:English
Description
Summary: To improve the efficiency and accuracy of autonomous driving vehicles' perception of the external environment, a multi-task detection model, DRMNet (Dual-resolution Multi-task Network), is proposed for autonomous driving scenarios; it simultaneously performs vehicle detection, lane detection, and drivable-area detection. First, because repeated downsampling in the backbone network loses feature information and degrades detection accuracy, the backbone is designed as a two-pathway structure that extracts shallow detail information and deep semantic information, respectively. Second, a multi-scale feature fusion module (MFFM) is designed to fuse the extracted shallow detail and deep semantic information. Then, separate detection branches are designed to match the characteristics of each detection task. Finally, experiments on the BDD100K dataset demonstrate the performance of DRMNet on the three tasks: vehicle detection achieves a recall of 93.9% and an mAP of 80.0%, lane detection achieves an accuracy of 76.3%, and drivable-area detection achieves an mIoU of 92.2%. DRMNet outperforms existing multi-task models, and experiments in real-world scenes show that it generalizes well.
ISSN: 0018-9545
1939-9359
DOI: 10.1109/TVT.2023.3296735
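
The abstract above describes DRMNet's architecture only at a high level; the paper's actual layer configuration, channel widths, and MFFM design are not given in this record. The PyTorch sketch below is therefore only a minimal illustration, under assumed strides and channel sizes, of the two ideas the abstract names: a two-pathway backbone (a shallow, high-resolution detail path alongside a deep, low-resolution semantic path) and a fusion module, standing in for MFFM, that upsamples the semantic features and merges them with the detail features. All class names, strides, and widths here are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MFFM(nn.Module):
        # Sketch of a multi-scale feature fusion module: project the deep
        # semantic map to the detail path's width, upsample it to the
        # detail resolution, concatenate, and fuse with a 3x3 conv.
        # The paper's actual MFFM design is not given in this record.
        def __init__(self, detail_ch, semantic_ch, out_ch):
            super().__init__()
            self.reduce = nn.Conv2d(semantic_ch, detail_ch, 1)
            self.fuse = nn.Sequential(
                nn.Conv2d(2 * detail_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))

        def forward(self, detail, semantic):
            semantic = F.interpolate(self.reduce(semantic),
                                     size=detail.shape[2:],
                                     mode="bilinear", align_corners=False)
            return self.fuse(torch.cat([detail, semantic], dim=1))

    class DualResolutionBackbone(nn.Module):
        # Two pathways after a shared stem: a detail path that downsamples
        # only once more (preserving spatial detail) and a semantic path
        # that downsamples three more times (building deep semantics).
        # All strides and channel widths are illustrative assumptions.
        def __init__(self):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1),
                nn.ReLU(inplace=True))
            self.detail = nn.Sequential(                 # 1/4 resolution
                nn.Conv2d(32, 64, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1),
                nn.ReLU(inplace=True))
            self.semantic = nn.Sequential(               # 1/16 resolution
                nn.Conv2d(32, 64, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(128, 256, 3, stride=2, padding=1),
                nn.ReLU(inplace=True))
            self.mffm = MFFM(detail_ch=64, semantic_ch=256, out_ch=128)

        def forward(self, x):
            x = self.stem(x)
            # Fused map at 1/4 input resolution; in the full model the
            # task-specific branches (vehicle, lane, drivable area)
            # would each consume this shared feature.
            return self.mffm(self.detail(x), self.semantic(x))

    fused = DualResolutionBackbone()(torch.randn(1, 3, 384, 640))
    print(fused.shape)  # torch.Size([1, 128, 96, 160])

In the full model described by the abstract, the three task-specific detection branches would each branch off the fused feature map produced here; only the shared backbone-plus-fusion path is sketched.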