PG-NeuS: Robust and Efficient Point Guidance for Multi-View Neural Surface Reconstruction
Published in: IEEE Transactions on Visualization and Computer Graphics, 2024-12, pp. 1-15
Format: Article
Language: English
Summary: Learning multi-view neural surface reconstruction under the supervision of point clouds or depth maps has recently emerged as a promising approach. However, due to weak perception and underutilization of prior information, current methods still struggle with limited accuracy and excessive time complexity. In addition, perturbation of the prior data is an important yet rarely considered issue that often results in distorted geometry. To address these challenges, we propose a novel point-guided method named PG-NeuS, which achieves accurate and efficient reconstruction while coping robustly with point noise. Specifically, the aleatoric uncertainty of the point cloud is modeled to capture the noise distribution, estimating the reliability of each point and enhancing robustness against noise. Moreover, a Neural Projection module is proposed to connect points and images, adding geometric constraints to the implicit surface and achieving more precise point guidance. To better compensate for the geometric bias between volume rendering and point modeling, we additionally design a Bias network that leverages the geometric information in high-fidelity points to enhance detail representation. Benefiting from this effective point guidance, the proposed PG-NeuS achieves an 11x speedup and a 33.3% accuracy improvement over NeuS on DTU, even with a lightweight network. Extensive experiments show that our method yields high-quality surfaces with high efficiency, especially in fine-grained details and smooth regions, outperforming state-of-the-art methods. It also exhibits strong robustness to noisy and sparse data.
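The record does not give PG-NeuS's exact formulation, but a common way to model per-point aleatoric uncertainty, as the summary describes, is a heteroscedastic loss in which each point predicts its own noise scale: points assigned large uncertainty are down-weighted, and a log-variance penalty prevents the trivial solution of unbounded uncertainty. The sketch below illustrates the idea with hypothetical function names; it is not the paper's implementation.

```python
import numpy as np

def uncertainty_weighted_loss(residuals, log_sigma):
    """Heteroscedastic (aleatoric) uncertainty loss (Kendall & Gal style).

    residuals: per-point fitting errors (e.g., SDF values at prior points).
    log_sigma: per-point predicted log standard deviation of the noise.
    Points with large predicted sigma contribute less to the squared-error
    term; the +log_sigma regularizer keeps sigma from growing without bound.
    """
    sigma_sq = np.exp(2.0 * log_sigma)
    return np.mean(residuals ** 2 / (2.0 * sigma_sq) + log_sigma)

def reliability(log_sigma):
    """Per-point reliability as inverse predicted variance (hypothetical)."""
    return np.exp(-2.0 * log_sigma)
```

With `log_sigma = 0` the loss reduces to a plain half-squared error, while noisy points that learn a larger `log_sigma` are softly discounted rather than rejected outright, which is one plausible route to the noise robustness the abstract claims.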
ISSN: 1077-2626
DOI: 10.1109/TVCG.2024.3514748