
Overlap-Aware Hierarchical Decoder for point cloud registration

Bibliographic Details
Published in: Journal of King Saud University - Computer and Information Sciences, 2024-02, Vol. 36 (2), p. 101941, Article 101941
Main Authors: Zhang, Qiang, Wang, Dongqiang, Yue, Qianwen
Format: Article
Language:English
Description
Summary: Extracting high-quality correspondences is a central challenge for current feature-learning-based point cloud registration methods. Coarse-to-fine network structures have recently shown great promise in addressing this challenge. Inspired by such structures, we investigate the effectiveness of two-stage network optimization for matching and propose a keypoint-free registration model, the Overlap-Aware Hierarchical Decoder (OAH-Net). The model is designed to reduce outliers in the matching results and to better capture invariance under geometric transformations. To this end, we propose a point-to-point perception module that encodes the paired point clouds and perceives their overlapping regions, together with a pyramid hierarchical decoder that decodes multi-level features. We further design an optimal matching mechanism adapted to the pyramid structure that extracts accurate correspondences from multi-level matching, thereby improving the accuracy of the final registration. Experiments on indoor and outdoor benchmarks validate the strong performance of OAH-Net, which maintains a low parameter count and high computational efficiency while achieving accuracy and stability comparable to state-of-the-art models.
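The abstract does not specify how the optimal matching mechanism works. As an illustration only, a common way to turn coarse feature similarities into correspondences in coarse-to-fine registration pipelines is Sinkhorn normalization followed by mutual nearest-neighbor filtering; the sketch below is a minimal NumPy version of that generic idea, not OAH-Net's actual mechanism, and all function names are hypothetical.

```python
import numpy as np

def sinkhorn(scores, n_iters=20):
    """Alternately normalize rows and columns of exp(scores) in log space,
    driving it toward a doubly-stochastic soft-assignment matrix."""
    log_p = scores.astype(float).copy()
    for _ in range(n_iters):
        log_p -= np.log(np.exp(log_p).sum(axis=1, keepdims=True))  # row norm
        log_p -= np.log(np.exp(log_p).sum(axis=0, keepdims=True))  # column norm
    return np.exp(log_p)

def mutual_top1(assignment):
    """Keep only pairs (i, j) that are each other's best match,
    a simple way to suppress outlier correspondences."""
    row_best = assignment.argmax(axis=1)
    col_best = assignment.argmax(axis=0)
    return [(i, j) for i, j in enumerate(row_best) if col_best[j] == i]

# Toy example: point-cloud B's features are a noisy permutation of A's.
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(5, 32))
perm = rng.permutation(5)
feats_b = feats_a[perm] + 0.01 * rng.normal(size=(5, 32))

scores = feats_a @ feats_b.T            # similarity matrix between the two sets
pairs = mutual_top1(sinkhorn(scores))   # recovered correspondences (i in A, j in B)
```

In this toy setup, each recovered pair (i, j) satisfies perm[j] == i, i.e. the permutation relating the two feature sets is found; real pipelines would additionally propagate such coarse matches down to finer resolution levels.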
ISSN: 1319-1578, 2213-1248
DOI: 10.1016/j.jksuci.2024.101941