Semantic Segmentation for Free Space and Lane Based on Grid-Based Interest Point Detection
Published in: IEEE Transactions on Intelligent Transportation Systems, 2022-07, Vol. 23 (7), pp. 8498-8512
Main Authors:
Format: Article
Language: English
Summary: An increasing number of tasks have been developed for autonomous driving and advanced driver assistance systems. This, however, raises the problem of porting multiple functionalities onto a power-constrained computing device. The objective of this work is therefore to alleviate the complex learning procedure of the pixel-wise approach to driving scene understanding. In this paper, we go beyond pixel-wise semantic segmentation by recasting it as a point detection task and apply it to detecting free space and lanes. Instead of pixel-wise learning, we train a single deep convolutional neural network for grid-level point-of-interest detection, followed by computer vision (CV) based post-processing in the end branches corresponding to the characteristics of the target classes. To obtain the final pixel-wise segmentation result and a parametric description of the lanes, we propose a CV-based post-processing step that decodes the points output by the network. The results show that the network can learn the spatial relationships among points of interest, including representative points on the contour of the free-space region and representative points along the center of each road lane. We verify our method on two publicly available datasets, achieving 98.2% mIoU on the KITTI dataset for the evaluation of free space and 97.8% accuracy on the TuSimple dataset (with the field of view below the y=320 axis) for the evaluation of lanes.
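The summary describes decoding grid-level interest points back into a pixel-wise segmentation mask. A minimal sketch of that decoding idea is given below, under stated assumptions: the grid stride, the cell-to-pixel mapping, and the ray-casting polygon fill are illustrative choices, not the authors' actual post-processing pipeline.

```python
# Hypothetical decoding sketch: grid-level contour points -> pixel mask.
# All names and the stride value are assumptions for illustration only.

def grid_points_to_pixels(cells, stride):
    """Map (row, col) grid cells to (x, y) pixel centers."""
    return [(c * stride + stride // 2, r * stride + stride // 2)
            for r, c in cells]

def point_in_polygon(x, y, poly):
    """Ray-casting test: is pixel (x, y) inside the closed polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xi:
                inside = not inside
    return inside

def rasterize(poly, width, height):
    """Fill the polygon into a binary mask (list of rows)."""
    return [[1 if point_in_polygon(x, y, poly) else 0
             for x in range(width)] for y in range(height)]

def iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks (the mIoU building block)."""
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b)
                for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b)
                for a, b in zip(ra, rb))
    return inter / union if union else 1.0

# Toy example: four contour points predicted on a 4x4 grid with stride 8.
cells = [(0, 0), (0, 3), (3, 3), (3, 0)]
poly = grid_points_to_pixels(cells, stride=8)   # square free-space contour
mask = rasterize(poly, width=32, height=32)
print(iou(mask, mask))  # identical masks -> 1.0
```

The point of the sketch is the cost model: the network only has to predict a handful of contour points per grid, and cheap CV post-processing recovers the dense mask that a pixel-wise network would otherwise have to learn directly.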
ISSN: 1524-9050, 1558-0016
DOI: 10.1109/TITS.2021.3083526