Joint Learning for Scattered Point Cloud Understanding with Hierarchical Self-Distillation
Published in: IEEE Sensors Journal, 2025, pp. 1-1
Main Authors: , , ,
Format: Article
Language: English
Summary: Numerous point-cloud understanding techniques focus on whole entities and have achieved satisfactory results, albeit with limited sparsity tolerance. These methods are generally sensitive to incomplete point clouds scanned with flaws or large gaps. In this paper, we propose an end-to-end architecture that compensates for and identifies partial point clouds on the fly. First, we propose a cascaded solution that integrates both the upstream masked autoencoder (MAE) and the downstream understanding networks simultaneously, allowing the task-oriented downstream network to identify the points generated by the completion-oriented upstream network. The two streams complement each other, improving performance on both completion and the downstream-dependent tasks. Second, to explicitly understand the pattern of the predicted points, we introduce hierarchical self-distillation (HSD), which can be applied to any hierarchy-based point cloud method. HSD ensures that the deepest classifier, which has a larger perceptual field of local kernels and a longer code length, provides additional regularization to the intermediate classifiers rather than simply aggregating the multi-scale features, thereby maximizing the mutual information (MI) between teacher and students. The HSD strategy is particularly well suited to scattered point clouds, where a single prediction may be imprecise because the geometric shape being reconstructed is inherently irregular and sparse. We show the advantage of the self-distillation process in the hyperspaces based on the information bottleneck principle. Our method achieves state-of-the-art performance on both classification and part segmentation tasks.
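The abstract describes HSD only at a high level. As a rough illustration of the stated mechanism (the deepest classifier regularizing the intermediate ones instead of simply aggregating multi-scale features), below is a minimal PyTorch-style sketch in which the deepest head acts as teacher and supervises the shallower heads with a temperature-scaled KL term, a standard proxy for the teacher-student MI objective. All module names, feature dimensions, and the tau/alpha hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of hierarchical self-distillation (HSD) as described in the
# abstract. Everything below (names, dims, tau, alpha) is an illustrative
# assumption, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalHeads(nn.Module):
    """One classifier per hierarchy level, ordered shallow to deep."""
    def __init__(self, feat_dims=(128, 256, 512), num_classes=40):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(d, num_classes) for d in feat_dims)

    def forward(self, feats):
        # feats: list of per-level global features, ordered shallow -> deep
        return [head(f) for head, f in zip(self.heads, feats)]

def hsd_loss(logits, labels, tau=4.0, alpha=0.5):
    """Cross-entropy on every head, plus KL from the deepest head (teacher)
    to each intermediate head (student). The teacher logits are detached so
    regularization flows only from deep to shallow."""
    ce = sum(F.cross_entropy(l, labels) for l in logits) / len(logits)
    teacher = F.softmax(logits[-1].detach() / tau, dim=-1)
    kd = sum(
        F.kl_div(F.log_softmax(l / tau, dim=-1), teacher, reduction="batchmean")
        for l in logits[:-1]
    ) * tau ** 2 / (len(logits) - 1)
    return (1 - alpha) * ce + alpha * kd

# Toy usage with random multi-scale features (batch of 8, 40 classes):
heads = HierarchicalHeads()
feats = [torch.randn(8, d) for d in (128, 256, 512)]
labels = torch.randint(0, 40, (8,))
loss = hsd_loss(heads(feats), labels)
loss.backward()
```

Detaching the teacher logits keeps the gradient flow one-directional, which matches the abstract's claim that the deepest classifier regularizes the intermediate ones rather than the reverse.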
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2024.3512496