RGBD data based pose estimation: Why sensor fusion?
Format: Conference Proceeding
Language: English
Summary: Performing highly accurate pose estimation has long been an attractive research area in computer vision, and many algorithms have been proposed for this purpose. Beginning with RGB or grayscale image data, methods utilizing data from 3D sensors, such as Time of Flight (ToF) cameras or laser range finders, and later those based on RGBD data, have emerged chronologically. Algorithms that exploit image data mainly rely on minimizing image-plane error, i.e. the reprojection error. On the other hand, methods utilizing 3D measurements from depth sensors estimate the object pose so as to minimize the Euclidean distance between these measurements. However, although errors in the associated domains can be minimized effectively by such methods, the resulting pose estimates may not be sufficiently accurate when the dynamics of the object's motion are ignored. To address this, the proposed 3D rigid pose estimation algorithm fuses measurements from vision (RGB) and depth sensors in a probabilistic manner using an Extended Kalman Filter (EKF). It is shown that such a procedure increases pose estimation performance significantly compared to single-sensor approaches.
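As a rough illustration of the fusion idea the abstract describes, the sketch below shows how an EKF can combine a pixel (reprojection-domain) measurement from an RGB camera with a 3D point measurement from a depth sensor under a constant-velocity motion model. This is a minimal, hypothetical example, not the paper's actual method: it tracks a single 3D point rather than a full 6-DoF rigid pose, and the focal length `f`, time step `dt`, and noise covariances `Q`, `R_rgb`, and `R_depth` are all assumed values.

```python
# Minimal EKF sketch: fuse an RGB pixel measurement with a depth-sensor
# 3D measurement. Simplified to a single tracked point; the paper's full
# method estimates a 6-DoF rigid pose, which this sketch does not attempt.
import numpy as np

f = 500.0                    # assumed focal length in pixels (hypothetical)
dt = 1.0 / 30.0              # assumed frame interval

# State: [x, y, z, vx, vy, vz] with a constant-velocity motion model.
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)
Q = 1e-4 * np.eye(6)         # process noise (hypothetical tuning)
R_rgb = 1.0 * np.eye(2)      # pixel measurement noise (hypothetical)
R_depth = 1e-3 * np.eye(3)   # depth measurement noise (hypothetical)

def project(p):
    """Pinhole projection of a 3D point to pixel coordinates."""
    x, y, z = p
    return np.array([f * x / z, f * y / z])

def project_jacobian(p):
    """Jacobian of the pinhole projection w.r.t. the full 6D state."""
    x, y, z = p
    H = np.zeros((2, 6))
    H[0, 0] = f / z
    H[0, 2] = -f * x / z**2
    H[1, 1] = f / z
    H[1, 2] = -f * y / z**2
    return H

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update."""
    y = z - h(x[:3])                   # innovation
    Hx = H(x[:3])
    S = Hx @ P @ Hx.T + R
    K = P @ Hx.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ Hx) @ P
    return x, P

def step(x, P, z_rgb, z_depth):
    # Predict with the motion model, then fuse both sensors sequentially.
    x = F @ x
    P = F @ P @ F.T + Q
    # RGB update: probabilistically weights the reprojection error.
    x, P = ekf_update(x, P, z_rgb, project, project_jacobian, R_rgb)
    # Depth update: direct 3D measurement, so the Jacobian is [I | 0].
    H_d = np.hstack([np.eye(3), np.zeros((3, 3))])
    x, P = ekf_update(x, P, z_depth, lambda p: p, lambda p: H_d, R_depth)
    return x, P

# Usage: one predict/fuse cycle with noisy synthetic measurements.
x = np.array([0.1, -0.05, 2.0, 0.0, 0.0, 0.0])
P = 0.1 * np.eye(6)
true_p = np.array([0.12, -0.04, 2.02])
z_rgb = project(true_p) + np.random.randn(2)
z_depth = true_p + 0.03 * np.random.randn(3)
x, P = step(x, P, z_rgb, z_depth)
print("fused estimate:", x[:3])
```

The sequential update order is a convenience: for measurements with independent noise, processing the RGB and depth updates one after the other is equivalent to a single update with both measurements stacked, while keeping each sensor's Jacobian and noise model separate.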