
HyperPocket: Generative Point Cloud Completion

Bibliographic Details
Main Authors: Spurek, P., Kasymov, A., Mazur, M., Janik, D., Tadeja, S.K., Struski, L., Tabor, J., Trzcinski, T.
Format: Conference Proceeding
Language: English
Description
Summary: Scanning real-life scenes with modern registration devices typically gives incomplete point cloud representations, mostly due to the limitations of the scanning process and 3D occlusions. Therefore, completing such partial representations remains a fundamental challenge for many computer vision applications. Most existing approaches aim to solve this problem by learning to reconstruct individual 3D objects in a synthetic setup of an uncluttered environment, which is far from a real-life scenario. In this work, we reformulate the problem of point cloud completion as an object hallucination task. Thus, we introduce a novel autoencoder-based architecture called HyperPocket that disentangles latent representations and, as a result, enables the generation of multiple variants of the completed 3D point clouds. Furthermore, we split point cloud processing into two disjoint data streams and leverage a hypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the missing object parts. As a result, the generated point clouds are smooth, plausible, and geometrically consistent with the scene. Moreover, our method offers performance competitive with other state-of-the-art models, enabling a plethora of novel applications.
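
The hypernetwork idea described in the summary can be sketched in a few lines of PyTorch: an encoder embeds the observed (existing) part of the point cloud, and a hypernetwork turns that embedding into the weights of a small target network that maps samples from a simple prior into points filling the missing region (the pocket). This is an illustrative approximation only, not the authors' implementation; all module names, layer sizes, and the unit-cube prior are assumptions.

```python
# Minimal sketch (not the authors' code) of hypernetwork-based completion:
# encode the observed points, emit weights of a small target MLP, and push
# prior samples through that MLP to hallucinate the missing part.
import torch
import torch.nn as nn


class PointEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (B, N, 3) -> latent code: (B, latent_dim)
        return self.mlp(pts).max(dim=1).values


class HyperNetwork(nn.Module):
    """Maps a latent code to the weights of a small target MLP (3 -> H -> 3)."""

    def __init__(self, latent_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.hidden = hidden
        n_params = (3 * hidden + hidden) + (hidden * 3 + 3)  # W1, b1, W2, b2
        self.head = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, z: torch.Tensor, prior_pts: torch.Tensor) -> torch.Tensor:
        # z: (B, latent_dim); prior_pts: (B, M, 3) samples from a simple prior.
        B, M, _ = prior_pts.shape
        H = self.hidden
        params = self.head(z)
        i = 0
        W1 = params[:, i:i + 3 * H].view(B, 3, H); i += 3 * H
        b1 = params[:, i:i + H].view(B, 1, H); i += H
        W2 = params[:, i:i + H * 3].view(B, H, 3); i += H * 3
        b2 = params[:, i:i + 3].view(B, 1, 3)
        h = torch.relu(prior_pts @ W1 + b1)
        return h @ W2 + b2  # (B, M, 3): hallucinated points for the missing part


if __name__ == "__main__":
    enc, hyper = PointEncoder(), HyperNetwork()
    existing = torch.rand(2, 1024, 3)        # observed part of the scene
    prior = torch.rand(2, 512, 3) * 2 - 1    # samples from a unit-cube prior
    completion = hyper(enc(existing), prior)
    print(completion.shape)                  # torch.Size([2, 512, 3])
```

Because the completion is obtained by mapping an arbitrary number of prior samples through the generated target network, a sketch like this can produce completions at any resolution, and a stochastic latent code would yield multiple variants for the same partial input, in the spirit of the disentangled representation described above.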
ISSN: 2153-0866
DOI: 10.1109/IROS47612.2022.9981829