
Fusing vision and LIDAR - Synchronization, correction and occlusion reasoning

Bibliographic Details
Main Authors: Schneider, Sebastian, Himmelsbach, Michael, Luettel, Thorsten, Wuensche, Hans-Joachim
Format: Conference Proceeding
Language: English
Description
Summary: Autonomous navigation in unstructured environments such as forest or country roads with dynamic objects remains a challenging task, particularly with respect to perceiving the environment with multiple different sensors. The problem has been addressed both by the computer vision community and by researchers working with laser range-finding technology such as the Velodyne HDL-64. Since cameras and LIDAR sensors complement one another in terms of color and depth perception, fusing both sensors is reasonable in order to obtain color images with depth and reflectance information as well as 3D LIDAR point clouds with color information. In this paper we propose a method for sensor synchronization designed especially for dynamic scenes, a low-level fusion of the data of both sensors, and a solution to the occlusion problem that arises from the different viewpoints of the fused sensors.
ISSN: 1931-0587, 2642-7214
DOI: 10.1109/IVS.2010.5548079
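As a rough illustration of the low-level fusion and occlusion reasoning the summary describes (a minimal sketch under assumed conventions, not the paper's actual algorithm): LIDAR points are projected into the camera image through pinhole intrinsics `K` and LIDAR-to-camera extrinsics `R`, `t`, and a per-pixel z-buffer marks points hidden behind nearer surfaces before colors are sampled. The function name, argument shapes, and the 0.5 m occlusion tolerance are all assumptions for the sketch.

```python
import numpy as np

def colorize_point_cloud(points, image, K, R, t, tol=0.5):
    """Attach image colors to LIDAR points via pinhole projection.

    points : (N, 3) array in the LIDAR frame
    image  : (H, W, 3) camera image
    K      : (3, 3) camera intrinsics; R, t : LIDAR-to-camera extrinsics
    Returns per-point RGB colors and a visibility mask; a point hidden
    behind a nearer surface at the same pixel is marked occluded.
    """
    h, w, _ = image.shape
    n = points.shape[0]
    cam = points @ R.T + t                       # LIDAR frame -> camera frame
    z = cam[:, 2]
    visible = z > 0                              # behind the camera: never visible

    uv = np.zeros((n, 2), dtype=int)
    uvw = cam[visible] @ K.T                     # pinhole projection
    uv[visible] = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    u, v = uv[:, 0], uv[:, 1]
    visible &= (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # z-buffer: record the nearest projected depth at each pixel
    zbuf = np.full((h, w), np.inf)
    idx = np.where(visible)[0]
    for i in idx:
        zbuf[v[i], u[i]] = min(zbuf[v[i], u[i]], z[i])
    # a point noticeably behind the nearest depth at its pixel is occluded
    visible[idx] = z[idx] <= zbuf[v[idx], u[idx]] + tol

    colors = np.zeros((n, 3), dtype=image.dtype)
    colors[visible] = image[v[visible], u[visible]]
    return colors, visible
```

With two points projecting to the same pixel, only the nearer one receives the pixel's color; the farther one is reported as occluded, which is the viewpoint-difference problem the abstract refers to.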