
Multiple geometry representations for 6D object pose estimation in occluded or truncated scenes



Bibliographic Details
Published in: Pattern Recognition, 2022-12, Vol. 132, p. 108903, Article 108903
Main Authors: Wang, Jichun; Qiu, Lemiao; Yi, Guodong; Zhang, Shuyou; Wang, Yang
Format: Article
Language: English
Summary:
• A novel 6D object pose estimation method based on multiple geometry representations.
• A two-stage pose regression module computes the 6D pose of an object.
• Capable of handling textureless objects in occluded or truncated scenes.

Deep learning-based 6D object pose estimation methods operating on a single RGBD image have recently received increasing attention because of their powerful representation learning capabilities. These methods, however, cannot handle severe occlusion and truncation. In this paper, we present a novel 6D object pose estimation method based on multiple geometry representations. Specifically, we introduce a network that fuses the appearance and geometry features extracted from the input color and depth images. We then use these per-point fusion features to estimate keypoint offsets, edge vectors, and dense symmetry correspondences in the canonical coordinate system. Finally, a two-stage pose regression module computes the 6D pose of the object. Compared with a strategy based on 3D keypoints alone, this combination of multiple geometry representations provides richer and more diverse cues, especially in occluded or truncated scenes. To demonstrate the method's robustness to occlusion and truncation, we conduct comparative experiments on the Occlusion LineMOD, Truncation LineMOD, and T-LESS datasets. The results show that the proposed method outperforms state-of-the-art techniques by a large margin.
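
The record itself contains no code. As a rough illustration of the pipeline the abstract describes, the sketch below shows two standard ingredients of keypoint-based 6D pose estimation: aggregating per-point keypoint offset predictions into 3D keypoint estimates, and recovering the rigid pose by least-squares (Kabsch) alignment against the canonical keypoints. The function names and the NumPy-only setup are assumptions made for illustration, not the authors' implementation; the paper's edge vectors, symmetry correspondences, and two-stage regression are not modeled here.

    # Illustrative sketch only -- not the authors' code.
    import numpy as np

    def vote_keypoints(scene_pts, offsets):
        """Aggregate per-point keypoint offset predictions into keypoint estimates.

        scene_pts: (N, 3) scene points in the camera frame.
        offsets:   (N, K, 3) predicted offsets from each scene point to each of
                   K keypoints. Returns (K, 3) keypoint position estimates.
        """
        candidates = scene_pts[:, None, :] + offsets  # (N, K, 3) votes
        return candidates.mean(axis=0)

    def fit_pose_kabsch(canonical_kps, camera_kps):
        """Return (R, t) minimizing ||R @ canonical + t - camera||^2 (Kabsch).

        canonical_kps, camera_kps: (K, 3) arrays of corresponding 3D keypoints.
        """
        mu_c = canonical_kps.mean(axis=0)
        mu_k = camera_kps.mean(axis=0)
        P = canonical_kps - mu_c
        Q = camera_kps - mu_k
        # SVD of the cross-covariance gives the optimal rotation.
        H = P.T @ Q
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_k - R @ mu_c
        return R, t

    if __name__ == "__main__":
        # Sanity check: recover a known pose from noiseless predictions.
        rng = np.random.default_rng(0)
        canonical = rng.normal(size=(8, 3))          # keypoints, object frame
        a = np.pi / 5
        R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                           [np.sin(a),  np.cos(a), 0.0],
                           [0.0,        0.0,       1.0]])
        t_true = np.array([0.1, -0.2, 0.9])
        kps_cam = canonical @ R_true.T + t_true      # keypoints, camera frame

        scene = rng.normal(size=(100, 3))            # observed scene points
        offsets = kps_cam[None, :, :] - scene[:, None, :]  # ideal offsets
        kps_est = vote_keypoints(scene, offsets)

        R, t = fit_pose_kabsch(canonical, kps_est)
        assert np.allclose(R, R_true) and np.allclose(t, t_true)

In a real system the offsets would come from the network's per-point fusion features and the votes would be noisy, so a robust aggregator (e.g., clustering or RANSAC) would typically replace the plain mean.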
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2022.108903