
Partition-Based Point Cloud Completion Network with Density Refinement

In this paper, we propose a novel method for point cloud completion called PADPNet. Our approach uses a combination of global and local information to infer missing elements in the point cloud. We achieve this by dividing the input point cloud into uniform local regions, called perceptual fields, which are abstractly understood as special convolution kernels. The set of points in each local region is represented as a feature vector and transformed into N uniform perceptual fields as the input to our transformer model. We also designed a geometric density-aware block to better exploit the inductive bias of the point cloud's 3D geometric structure. Our method preserves sharp edges and detailed structures that are often lost in voxel-based or point-based approaches. Experimental results demonstrate that our approach outperforms other methods in reducing the ambiguity of output results. Our proposed method has important applications in 3D computer vision and can efficiently recover complete 3D object shapes from incomplete point clouds.
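The partition step the abstract describes — splitting the cloud into uniform local regions and summarizing each region as a feature vector — can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the uniform voxel-grid partition, the `grid` resolution, and the centroid-plus-point-count feature (a crude stand-in for the density-aware features) are all assumptions.

```python
import numpy as np

def partition_point_cloud(points: np.ndarray, grid: int = 4) -> dict:
    """Split an (M, 3) point cloud into up to grid**3 uniform cubic regions.

    Returns a dict mapping a flat cell index to the (k, 3) array of points
    that fall inside that cell.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Normalize to [0, 1) per axis so every point lands in exactly one cell;
    # the small epsilon keeps the max point inside the last cell.
    cells = np.floor((points - lo) / (hi - lo + 1e-9) * grid).astype(int)
    flat = cells[:, 0] * grid * grid + cells[:, 1] * grid + cells[:, 2]
    return {idx: points[flat == idx] for idx in np.unique(flat)}

def region_features(regions: dict) -> np.ndarray:
    """One feature vector per non-empty region: centroid (3) + point count (1)."""
    feats = [np.concatenate([pts.mean(axis=0), [len(pts)]])
             for pts in regions.values()]
    return np.stack(feats)

rng = np.random.default_rng(0)
cloud = rng.random((1024, 3))
regions = partition_point_cloud(cloud, grid=4)
feats = region_features(regions)
print(feats.shape)  # (number of non-empty regions, 4)
```

In the paper these per-region vectors would then be fed to the transformer as the N uniform perceptual fields; here the feature is deliberately minimal just to show the data flow.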


Bibliographic Details
Published in: Entropy (Basel, Switzerland), 2023-07, Vol. 25 (7), p. 1018
Main Authors: Li, Jianxin; Si, Guannan; Liang, Xinyu; An, Zhaoliang; Tian, Pengxin; Zhou, Fengyu
Format: Article
Language: English
DOI: 10.3390/e25071018
ISSN: 1099-4300
PMID: 37509965
Subjects: Algorithms; Analysis; Cloud computing; Computer vision; convolutional neural networks; Density; Euclidean space; geometric density; gridding; Methods; Neural networks; point cloud completion; radar; Three dimensional models