Lane Detection Method under Low-Light Conditions Combining Feature Aggregation and Light Style Transfer

Bibliographic Details
Published in: Automatic Control and Computer Sciences, 2023-04, Vol. 57 (2), pp. 143–153
Main Authors: Lou, Jianlou; Liang, Feng; Qu, Zhaoyang; Li, Xiangyu; Chen, Keyu; He, Bochuan
Format: Article
Language: English
Description
Summary: Deep learning technology is widely used in lane detection, but applying it under conditions such as environmental occlusion and low light remains challenging. On the one hand, an ordinary convolutional neural network (CNN) cannot obtain lane information before and after an occlusion in low-light conditions. On the other hand, only a small amount of lane data (such as CULane) has been collected under low-light conditions, and new data require considerable manual labeling. Given these problems, we propose a double attention recurrent feature-shift aggregator (DARESA) module, which exploits prior knowledge of lane shape in the spatial and channel dimensions and enriches the original lane features by repeatedly capturing pixel information across rows and columns. This indirectly increases the global feature information and the network's ability to extract fine-grained features. Moreover, we trained an unsupervised low-light style transfer model suited to autonomous driving scenarios. The model transfers the daytime images in the CULane dataset to low-light images, eliminating the cost of manual labeling. In addition, adding an appropriate number of generated images to the training set can enhance the environmental adaptability of the lane detector, yielding better detection results than those achieved by using CULane only.
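The recurrent row/column feature-shift aggregation described in the summary can be sketched numerically. The sketch below is an illustrative approximation only: the function name `feature_shift_aggregate`, the halving strides, and the plain ReLU message passing are assumptions made for this example; the actual DARESA module uses learned convolutions and double (spatial and channel) attention, which are omitted here.

```python
import numpy as np

def feature_shift_aggregate(feat, iters=3, alpha=0.5):
    """Illustrative recurrent feature-shift aggregation (RESA-style sketch).

    feat: feature map of shape (C, H, W). At iteration k the map is
    shifted vertically and horizontally by strides that halve each
    round (H/2**k rows, W/2**k columns), and the activated shifted
    features are added back, so pixel information propagates across
    rows and columns. `alpha` weights the aggregated message.
    """
    C, H, W = feat.shape
    out = feat.copy()
    for k in range(1, iters + 1):
        sh = max(H >> k, 1)  # vertical stride, halved each iteration
        sw = max(W >> k, 1)  # horizontal stride, halved each iteration
        # Shift down/up along rows, then right/left along columns,
        # and accumulate a non-negative (ReLU) message from each shift.
        for shifted in (np.roll(out, sh, axis=1), np.roll(out, -sh, axis=1),
                        np.roll(out, sw, axis=2), np.roll(out, -sw, axis=2)):
            out = out + alpha * np.maximum(shifted, 0.0)
    return out

# Example: a 4-channel 8x16 feature map keeps its shape, and every
# location only gains information (the added messages are non-negative).
feat = np.random.default_rng(0).normal(size=(4, 8, 16))
out = feature_shift_aggregate(feat)
```

Because each message is passed through a ReLU before accumulation, the aggregation can only add (never subtract) evidence at each location, which is one simple way a thin, elongated lane marking can borrow support from neighboring rows and columns when part of it is occluded.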
ISSN: 0146-4116; 1558-108X
DOI: 10.3103/S0146411623020050