
Semantic annotation of ground and vegetation types in 3D maps for autonomous underwater vehicle operation

Bibliographic Details
Main Authors: Pfingsthorn, M., Birk, A., Vaskevicius, N.
Format: Conference Proceeding
Language: English
Description
Summary: The semantic annotation of 3D maps generated by an Autonomous Underwater Vehicle (AUV) is presented. Two complementary methods are used for this purpose. First, large planar patches are fitted and their plane normals analyzed. Second, the normals are analyzed locally at the point cloud level. While the first method captures large-scale environment structures such as the sea floor, cliffs, and (man-made) walls, the second targets smaller, locally non-planar elements such as vegetation and rocks. The semantic 3D mapping is evaluated in a high-fidelity simulator, where both methods are shown to be very fast and to work as intended.
ISSN: 0197-7385
DOI: 10.23919/OCEANS.2011.6107122
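
The second method described in the abstract rests on a local analysis of surface normals at the point-cloud level. The listing below is a minimal sketch of that general idea, not the authors' implementation: normals are estimated by PCA over k-nearest neighbours, and each point is labelled by how planar its neighbourhood is and how close its normal is to vertical. The library choices (NumPy, SciPy), the gravity-aligned z-axis, the label names, and all thresholds are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def classify_points(points, k=20, planarity_thresh=0.02, vertical_cos_thresh=0.9):
    """Label each point of an (N, 3) cloud by local planarity and normal direction."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)           # k nearest neighbours per point
    up = np.array([0.0, 0.0, 1.0])             # assumed gravity-aligned z-axis
    labels = []
    for neighbours in idx:
        patch = points[neighbours]
        patch = patch - patch.mean(axis=0)
        # PCA of the neighbourhood: the eigenvector of the smallest eigenvalue
        # approximates the local surface normal.
        eigvals, eigvecs = np.linalg.eigh(patch.T @ patch / len(patch))
        normal = eigvecs[:, 0]
        # Surface variation (smallest eigenvalue / sum) measures how planar the
        # neighbourhood is; large values suggest vegetation- or rock-like clutter.
        variation = eigvals[0] / max(eigvals.sum(), 1e-12)
        if variation > planarity_thresh:
            labels.append("non-planar (vegetation/rock-like)")
        elif abs(normal @ up) > vertical_cos_thresh:
            labels.append("planar, horizontal (sea-floor-like)")
        else:
            labels.append("planar, tilted/vertical (cliff- or wall-like)")
    return labels

if __name__ == "__main__":
    # Tiny synthetic example: a flat floor patch plus a scattered clutter blob.
    rng = np.random.default_rng(0)
    floor = np.column_stack([rng.uniform(0, 1, 200), rng.uniform(0, 1, 200), np.zeros(200)])
    clutter = rng.uniform(0.0, 0.3, size=(100, 3)) + np.array([0.5, 0.5, 0.1])
    labels = classify_points(np.vstack([floor, clutter]))
    print(labels[0], "|", labels[-1])

The per-point labels produced this way correspond to the smaller, locally non-planar elements mentioned in the abstract; the large planar structures (sea floor, cliffs, walls) would instead come from the first method, a fit of large planar patches followed by analysis of their plane normals.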