MDCS with fully encoding the information of local shape description for 3D rigid data matching
Published in: Image and Vision Computing, May 2022, Vol. 121, Article 104421
Main Authors: , , , , , , ,
Format: Article
Language: English
Summary: Local feature description is a fundamental research topic in 3D rigid data matching. However, achieving a well-balanced performance of a local shape descriptor across descriptiveness, robustness, compactness, and efficiency remains challenging. To this end, we propose a novel feature representation of the 3D local surface called multi-view depth and contour signatures (MDCS). The key to the MDCS descriptor is its multi-view, multi-attribute description, which provides comprehensive and effective geometric information. Specifically, we first construct a repeatable Local Reference Frame (LRF) for the local surface to achieve rotation invariance. We then integrate the depth information, characterized in a local coordinate manner, with the 2D contour cue derived from a 3D-to-2D projection, forming the depth and contour signatures (DCS). Finally, MDCS is generated by concatenating into a single vector the DCS descriptors captured from the three orthogonal view planes of the LRF. The performance of the MDCS method is evaluated on several data modalities (i.e., LiDAR, Kinect, and Space Time) with respect to Gaussian noise, varying mesh resolutions, clutter, and occlusion. Experimental results and rigorous comparisons with state-of-the-art methods show that our approach achieves superior performance in terms of descriptiveness, robustness, compactness, and efficiency. Moreover, we demonstrate the feasibility of MDCS in matching both LiDAR and Kinect point clouds for 3D vision applications and evaluate the generalization ability of the proposed method on real-world datasets.
Highlights:
• The mechanism relies on the integration of a multi-view and multi-attribute strategy.
• MDCS achieves high descriptiveness and strong robustness to clutter and occlusion.
• MDCS shows high applicability on public datasets and real-world point clouds.
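To make the pipeline described in the summary concrete, the following is a minimal NumPy sketch of a multi-view depth-and-contour style descriptor. It is an illustrative approximation, not the authors' MDCS implementation: the covariance-based LRF, the grid and sector bin counts, the exact signature definitions, and all function names (build_lrf, depth_contour_signature, mdcs_like_descriptor) are assumptions introduced here for illustration.

```python
# Illustrative sketch of a multi-view depth/contour descriptor (not the paper's exact MDCS).
import numpy as np

def build_lrf(neighbors, keypoint):
    """Covariance-based local reference frame (a common, simplified variant)."""
    centered = neighbors - keypoint
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    z = eigvecs[:, 0]                           # smallest eigenvalue ~ surface normal
    x = eigvecs[:, 2]                           # largest eigenvalue ~ dominant tangent
    # Disambiguate axis signs by a majority vote of neighbor directions.
    if np.sum(centered @ z) < 0:
        z = -z
    if np.sum(centered @ x) < 0:
        x = -x
    y = np.cross(z, x)
    return np.stack([x, y, z])                  # rows are the LRF axes

def depth_contour_signature(points_2d, depth, radius, bins=8):
    """Per-view signature: mean depth over a grid plus a radial contour profile."""
    # Depth part: average depth in each cell of a bins x bins grid on the view plane.
    grid = np.linspace(-radius, radius, bins + 1)
    ix = np.clip(np.digitize(points_2d[:, 0], grid) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(points_2d[:, 1], grid) - 1, 0, bins - 1)
    depth_sig = np.zeros((bins, bins))
    np.add.at(depth_sig, (ix, iy), depth)
    counts = np.bincount(ix * bins + iy, minlength=bins * bins).reshape(bins, bins)
    depth_sig = np.divide(depth_sig, counts, out=np.zeros_like(depth_sig),
                          where=counts > 0)
    # Contour part: farthest projected point per angular sector (a 2D silhouette cue).
    angles = np.arctan2(points_2d[:, 1], points_2d[:, 0])
    radii = np.linalg.norm(points_2d, axis=1)
    sector = np.clip(((angles + np.pi) / (2 * np.pi) * bins).astype(int), 0, bins - 1)
    contour_sig = np.zeros(bins)
    np.maximum.at(contour_sig, sector, radii)
    return np.concatenate([depth_sig.ravel(), contour_sig / radius])

def mdcs_like_descriptor(keypoint, neighbors, radius, bins=8):
    """Concatenate depth+contour signatures from the three orthogonal LRF planes."""
    lrf = build_lrf(neighbors, keypoint)
    local = (neighbors - keypoint) @ lrf.T      # express neighbors in the LRF
    views = [(0, 1, 2), (1, 2, 0), (0, 2, 1)]   # (plane axis a, plane axis b, depth axis)
    parts = [depth_contour_signature(local[:, [a, b]], local[:, c], radius, bins)
             for a, b, c in views]
    return np.concatenate(parts)

# Example: descriptor for a keypoint on a synthetic noisy planar patch.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (500, 3))
pts[:, 2] = 0.05 * rng.standard_normal(500)
desc = mdcs_like_descriptor(np.zeros(3), pts, radius=1.0)
print(desc.shape)   # (3 * (8*8 + 8),) = (216,)
```

In this sketch the final descriptor simply concatenates a per-cell mean-depth map and an angular contour profile for each of the three orthogonal LRF planes, which is what keeps this family of descriptors compact and cheap to compute.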
ISSN: 0262-8856, 1872-8138
DOI: 10.1016/j.imavis.2022.104421