Effective pose estimation from point pairs
| Published in: | Image and Vision Computing, 2005-07, Vol. 23 (7), pp. 651-660 |
|---|---|
| Main Authors: | , , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| ISSN: | 0262-8856, 1872-8138 |
| DOI: | 10.1016/j.imavis.2005.03.003 |
Summary: This paper presents a new technique for depth and motion estimation from image sequences, aimed at the model-based pose estimation problem. The key to the proposed technique is a novel depth estimation approach: it directly computes the depths of model points in consecutive camera coordinate systems from the geometric relationships between the camera and pairs of model points, rather than individual model points. Based on the proposed depth estimation method, two strategies are discussed to handle three different cases of camera motion. Both strategies first compute depths from two images, independently of the motion parameters; they differ in whether two or three images are required to estimate the camera motion efficiently. If the camera only translates, two images suffice to compute the translation directly. The strategy requiring three images is mainly intended for the case where the camera translates with large rotation, which is difficult to recover accurately from two images. If the camera translates with only a small rotation, both strategies are applicable. The main contributions of this paper are the point-pair-based depth estimation method and the three-image strategy for recovering large rotational motion. The presented technique is simple and appealing. Extensive experiments on synthetic data and real images demonstrate its efficiency and robustness.
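The abstract notes that, once per-point depths are available, pure camera translation can be computed directly from two images. As a rough illustration of that idea only (not the paper's own formulation, which this record does not reproduce), the sketch below backprojects normalized image points with their estimated depths in two frames and fits the relative translation by least squares; the function names, sign convention, and synthetic data are all assumptions introduced for the example.

```python
# Illustrative sketch, not the authors' algorithm: recover pure translation
# between two views once depths of the model points are known in each frame.
import numpy as np

def backproject(m, z):
    """3D points in camera coordinates from normalized image points m (N x 2) and depths z (N,)."""
    m_h = np.hstack([m, np.ones((m.shape[0], 1))])  # homogeneous [u, v, 1]
    return m_h * z[:, None]                          # P_i = z_i * [u_i, v_i, 1]

def translation_from_two_views(m1, z1, m2, z2):
    """Least-squares t with P2_i ≈ P1_i + t (scene motion relative to the camera, no rotation)."""
    P1 = backproject(m1, z1)
    P2 = backproject(m2, z2)
    return (P2 - P1).mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P1 = rng.uniform(1.0, 5.0, size=(10, 3))           # synthetic points in frame 1
    t_true = np.array([0.2, -0.1, 0.3])                # assumed relative translation
    P2 = P1 + t_true
    m1, z1 = P1[:, :2] / P1[:, 2:3], P1[:, 2]          # normalized coordinates and depths
    m2, z2 = P2[:, :2] / P2[:, 2:3], P2[:, 2]
    print(translation_from_two_views(m1, z1, m2, z2))  # ≈ t_true
```

With noise-free depths the mean of the per-point displacements recovers the translation exactly; with noisy depths it remains the least-squares estimate, which is consistent with the abstract's claim that translation can be obtained directly from two images in the translation-only case.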