HyperPocket: Generative Point Cloud Completion
Scanning real-life scenes with modern registration devices typically gives incomplete point cloud representations, mostly due to limitations of the scanning process and 3D occlusions. Completing such partial representations therefore remains a fundamental challenge for many computer vision applications.
Main Authors: | Spurek, P.; Kasymov, A.; Mazur, M.; Janik, D.; Tadeja, S.K.; Struski, L.; Tabor, J.; Trzcinski, T. |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Computer architecture; Computer vision; Intelligent robots; Point cloud compression; Task analysis; Three-dimensional displays |
cited_by | |
---|---|
cites | |
container_end_page | 6853 |
container_issue | |
container_start_page | 6848 |
container_title | |
container_volume | |
creator | Spurek, P.; Kasymov, A.; Mazur, M.; Janik, D.; Tadeja, S.K.; Struski, L.; Tabor, J.; Trzcinski, T. |
description | Scanning real-life scenes with modern registration devices typically gives incomplete point cloud representations, mostly due to limitations of the scanning process and 3D occlusions. Completing such partial representations therefore remains a fundamental challenge for many computer vision applications. Most existing approaches aim to solve this problem by learning to reconstruct individual 3D objects in a synthetic, uncluttered environment, which is far from a real-life scenario. In this work, we reformulate the problem of point cloud completion as an object hallucination task. We introduce a novel autoencoder-based architecture called HyperPocket that disentangles latent representations and, as a result, enables the generation of multiple variants of the completed 3D point clouds. Furthermore, we split point cloud processing into two disjoint data streams and leverage a hypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the missing object parts. As a result, the generated point clouds are smooth, plausible, and geometrically consistent with the scene. Moreover, our method offers performance competitive with other state-of-the-art models, enabling a plethora of novel applications. |
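The description above outlines HyperPocket's pipeline: encode the visible part of the scan into a latent code, then use a hypernetwork to produce the weights of a small target network that maps samples from a simple prior into points filling the missing region (the "pocket"). The following is a minimal, untrained NumPy sketch of that idea only; every function name, shape, and the random-projection "hypernetwork" are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(partial_cloud, latent_dim=8):
    # Toy "encoder": summary statistics of the visible points stand in
    # for a learned latent code (illustration only).
    mean = partial_cloud.mean(axis=0)                       # (3,)
    std = partial_cloud.std(axis=0)                         # (3,)
    z = np.concatenate([mean, std, [partial_cloud.shape[0] / 1000.0], [0.0]])
    return z[:latent_dim]

def hypernetwork(z, hidden=16):
    # Toy hypernetwork: a fixed random projection of the latent code
    # produces the weights of a small target MLP (3 -> hidden -> 3).
    n_params = 3 * hidden + hidden + hidden * 3 + 3
    proj = rng.standard_normal((z.size, n_params))
    w = z @ proj
    i = 0
    W1 = w[i:i + 3 * hidden].reshape(3, hidden); i += 3 * hidden
    b1 = w[i:i + hidden];                        i += hidden
    W2 = w[i:i + hidden * 3].reshape(hidden, 3); i += hidden * 3
    b2 = w[i:i + 3]
    return W1, b1, W2, b2

def fill_pocket(z, n_points=256):
    # The target network maps samples from a uniform prior to 3D points
    # that, after training, would occupy the missing region ("pocket").
    W1, b1, W2, b2 = hypernetwork(z)
    u = rng.uniform(-1.0, 1.0, size=(n_points, 3))
    h = np.tanh(u @ W1 + b1)
    return h @ W2 + b2

partial = rng.uniform(-1.0, 1.0, size=(500, 3))   # stand-in partial scan
completion = fill_pocket(encode(partial))
print(completion.shape)                           # (256, 3)
```

Because the target network's weights are a function of the latent code, resampling or editing that code yields multiple plausible completions for the same partial input, which is the property the paper's disentangled latent space exploits.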
doi_str_mv | 10.1109/IROS47612.2022.9981829 |
format | conference_proceeding |
eisbn | 1665479272, 9781665479271 |
publisher | IEEE |
startdate | 2022-10-23 |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2153-0866 |
ispartof | 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, p.6848-6853 |
issn | 2153-0866 |
language | eng |
recordid | cdi_ieee_primary_9981829 |
source | IEEE Xplore All Conference Series |
subjects | Computer architecture; Computer vision; Intelligent robots; Point cloud compression; Task analysis; Three-dimensional displays |
title | HyperPocket: Generative Point Cloud Completion |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T12%3A46%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=HyperPocket:%20Generative%20Point%20Cloud%20Completion&rft.btitle=2022%20IEEE/RSJ%20International%20Conference%20on%20Intelligent%20Robots%20and%20Systems%20(IROS)&rft.au=Spurek,%20P.&rft.date=2022-10-23&rft.spage=6848&rft.epage=6853&rft.pages=6848-6853&rft.eissn=2153-0866&rft_id=info:doi/10.1109/IROS47612.2022.9981829&rft.eisbn=1665479272&rft.eisbn_list=9781665479271&rft_dat=%3Cieee_CHZPO%3E9981829%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i251t-fd4ba5022d31a1448d3edea4dcd5cab6eb1f192ba5a3d47a3c698fe652a722a73%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=9981829&rfr_iscdi=true |