Transforming a 3-D LiDAR Point Cloud Into a 2-D Dense Depth Map Through a Parameter Self-Adaptive Framework
Published in: IEEE Transactions on Intelligent Transportation Systems, 2017-01, Vol. 18 (1), pp. 165-176
Main Authors:
Format: Article
Language: English
Summary: The 3-D LiDAR scanner and the 2-D charge-coupled device (CCD) camera are two typical sensors for perceiving the surrounding environment in robotics and autonomous driving. They are commonly used together to improve perception accuracy by simultaneously recording the distances of surrounding objects as well as their color and shape. In this paper, we use the correspondence between a 3-D LiDAR scanner and a CCD camera to rearrange the captured LiDAR point cloud into a dense depth map, in which each 3-D point corresponds to a pixel at the same location in the RGB image. We assume that the LiDAR scanner and the CCD camera are accurately calibrated and synchronized beforehand, so that each 3-D LiDAR point cloud is aligned with its corresponding RGB image. Each frame of the LiDAR point cloud is first projected onto the RGB image plane to form a sparse depth map. A self-adaptive method is then proposed to upsample the sparse depth map into a dense one, in which the RGB image and an anisotropic diffusion tensor guide the upsampling by reinforcing RGB-depth compactness. Finally, convex optimization is applied to the dense depth map for global enhancement. Experiments on the KITTI and Middlebury data sets demonstrate that the proposed method outperforms several other relevant state-of-the-art methods in terms of visual comparison and root-mean-square error.
ISSN: 1524-9050, 1558-0016
DOI: 10.1109/TITS.2016.2564640
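The first step described in the summary, projecting each LiDAR point onto the RGB image plane to form a sparse depth map, can be sketched as below. This is a minimal illustration assuming a standard pinhole camera model with a known intrinsic matrix `K` and known LiDAR-to-camera extrinsics `R`, `t` (all function and parameter names here are hypothetical), not the paper's exact implementation:

```python
import numpy as np

def project_lidar_to_sparse_depth(points, K, R, t, image_shape):
    """Project 3-D LiDAR points into a sparse depth map.

    points: (N, 3) LiDAR points in the sensor frame.
    K: (3, 3) camera intrinsic matrix.
    R, t: rotation (3, 3) and translation (3,) from LiDAR to camera frame.
    image_shape: (height, width) of the aligned RGB image.
    Returns an (H, W) array; 0 marks pixels with no LiDAR return.
    """
    h, w = image_shape
    # Transform points into the camera coordinate frame.
    cam = points @ R.T + t
    # Keep only points in front of the camera (positive depth).
    cam = cam[cam[:, 2] > 0]
    # Pinhole projection onto the image plane, then perspective divide.
    uv = (K @ cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    depth = cam[:, 2]
    # Discard projections that fall outside the image bounds.
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, depth = u[valid], v[valid], depth[valid]
    # Fill the sparse map; when two points hit one pixel, keep the nearer.
    depth_map = np.full((h, w), np.inf)
    for ui, vi, di in zip(u, v, depth):
        if di < depth_map[vi, ui]:
            depth_map[vi, ui] = di
    depth_map[np.isinf(depth_map)] = 0.0
    return depth_map
```

The resulting sparse map is the input to the paper's self-adaptive upsampling stage; the nearest-depth rule at colliding pixels is one common convention for handling occlusions during projection.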