
Scanline Resolution-Invariant Depth Completion Using a Single Image and Sparse LiDAR Point Cloud


Bibliographic Details
Published in: IEEE Robotics and Automation Letters, 2021-10, Vol. 6 (4), p. 6961-6968
Main Authors: Ryu, Kwonyoung, Lee, Kang-il, Cho, Jegyeong, Yoon, Kuk-Jin
Format: Article
Language: English
Summary: Most existing deep learning-based depth completion methods are suitable only for high-resolution (e.g., 64-scanline) LiDAR measurements, and they usually fail to predict a reliable dense depth map from low-resolution (4-, 8-, or 16-scanline) LiDAR. However, reducing the number of LiDAR channels is of great interest in many respects (cost, device weight, power consumption). In this letter, we propose a new depth completion framework that handles various LiDAR scanline resolutions and performs as well as methods built for 64-scanline LiDAR inputs. To this end, we define a consistency loss between the predictions obtained from LiDAR measurements of different scanline resolutions (i.e., 4-, 8-, 16-, and 32-scanline LiDAR measurements). We also design a fusion module to integrate features from different modalities. Experiments show that our proposed method outperforms current state-of-the-art depth completion methods for low-scanline-resolution LiDAR inputs and performs comparably to existing models for 64-scanline LiDAR inputs on the KITTI benchmark dataset.
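The abstract's core idea — subsampling a high-resolution LiDAR sweep into lower scanline resolutions and penalizing disagreement between the resulting dense predictions — can be sketched in NumPy. The exact form of the paper's consistency loss is not given in this record, so the L1 formulation, the `subsample_scanlines` helper, and the toy depth values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def subsample_scanlines(depth, keep_every):
    """Simulate a lower-resolution LiDAR by keeping every k-th scanline (row)
    of a dense range image and zeroing the rest. (Hypothetical helper.)"""
    sparse = np.zeros_like(depth)
    sparse[::keep_every, :] = depth[::keep_every, :]
    return sparse

def consistency_loss(pred_a, pred_b):
    """Assumed L1 consistency between two dense depth predictions produced
    from inputs of different scanline resolutions."""
    return float(np.mean(np.abs(pred_a - pred_b)))

# Toy example: a 64-row range image standing in for a 64-scanline LiDAR sweep.
rng = np.random.default_rng(0)
depth64 = rng.uniform(1.0, 80.0, size=(64, 128))
depth16 = subsample_scanlines(depth64, keep_every=4)  # 16-scanline input
depth8 = subsample_scanlines(depth64, keep_every=8)   # 8-scanline input
```

In training, `pred_a` and `pred_b` would be the network's dense outputs for two subsampled versions of the same scene, so minimizing this term pushes the model toward scanline-resolution-invariant predictions.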
ISSN: 2377-3766
DOI: 10.1109/LRA.2021.3096499