R-PointHop: A Green, Accurate, and Unsupervised Point Cloud Registration Method
Inspired by the recent PointHop classification method, an unsupervised 3D point cloud registration method, called R-PointHop, is proposed in this work. R-PointHop first determines a local reference frame (LRF) for every point using its nearest neighbors and finds local attributes. Next, R-PointHop obtains local-to-global hierarchical features by point downsampling, neighborhood expansion, attribute construction and dimensionality reduction steps. Thus, point correspondences are built in hierarchical feature space using the nearest neighbor rule. Afterwards, a subset of salient points with good correspondence is selected to estimate the 3D transformation. The use of the LRF allows for invariance of the hierarchical features of points with respect to rotation and translation, thus making R-PointHop more robust at building point correspondence, even when the rotation angles are large. Experiments are conducted on the 3DMatch, ModelNet40, and Stanford Bunny datasets, which demonstrate the effectiveness of R-PointHop for 3D point cloud registration. R-PointHop's model size and training time are an order of magnitude smaller than those of deep learning methods, and its registration errors are smaller, making it a green and accurate solution. Our codes are available on GitHub ( https://github.com/pranavkdm/R-PointHop ).
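The LRF step described above can be illustrated with a standard PCA-based construction: take a point's k nearest neighbors, diagonalize their covariance, and disambiguate the eigenvector signs. This is a generic sketch of the technique, not the paper's exact recipe; the neighborhood size `k` and the sign-disambiguation rule are illustrative assumptions.

```python
import numpy as np

def local_reference_frame(points, query, k=16):
    """Estimate a local reference frame at `query` from its k nearest
    neighbors via PCA of the neighborhood covariance (a common LRF
    construction; R-PointHop's exact recipe may differ)."""
    d = np.linalg.norm(points - query, axis=1)
    nbrs = points[np.argsort(d)[:k]]           # k nearest neighbors (k x 3)
    cov = np.cov(nbrs.T)                       # 3x3 neighborhood covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    axes = eigvecs[:, ::-1]                    # columns: major, middle, minor axis
    # Eigenvectors are defined up to sign; orient each axis toward the
    # neighborhood centroid to make the frame reproducible.
    toward = nbrs.mean(axis=0) - query
    for i in range(3):
        if np.dot(toward, axes[:, i]) < 0:
            axes[:, i] = -axes[:, i]
    return axes                                # 3x3 orthonormal frame
```

Expressing each point's neighborhood attributes in such a frame is what makes the downstream features invariant to a rigid motion of the whole cloud, since the frame rotates along with the data.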
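The final step, estimating a 3D rigid transformation from point correspondences, has a classical closed-form least-squares solution via SVD (the orthogonal Procrustes / Kabsch method). The sketch below shows that standard estimator only; it does not reproduce the paper's salient-point selection, and assumes `src[i]` already corresponds to `dst[i]`.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t,
    computed in closed form from corresponding point sets (N x 3 each)."""
    src_c = src - src.mean(axis=0)             # center both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                             # optimal rotation
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In a full registration pipeline, the correspondences fed to this solver would come from the nearest-neighbor matches in feature space, restricted to the selected salient subset.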
| Published in | IEEE Transactions on Image Processing, 2022, Vol. 31, pp. 2710-2725 |
|---|---|
| Main Authors | Kadam, Pranav; Zhang, Min; Liu, Shan; Kuo, C.-C. Jay |
| Format | Article |
| Language | English |
| Subjects | 3D feature descriptor; deep learning; feature extraction; local reference frame (LRF); machine learning; point cloud registration; principal component analysis; rotation invariance |
container_end_page | 2725 |
container_issue | |
container_start_page | 2710 |
container_title | IEEE transactions on image processing |
container_volume | 31 |
creator | Kadam, Pranav Zhang, Min Liu, Shan Kuo, C. -C. Jay |
description | Inspired by the recent PointHop classification method, an unsupervised 3D point cloud registration method, called R-PointHop, is proposed in this work. R-PointHop first determines a local reference frame (LRF) for every point using its nearest neighbors and finds local attributes. Next, R-PointHop obtains local-to-global hierarchical features by point downsampling, neighborhood expansion, attribute construction and dimensionality reduction steps. Thus, point correspondences are built in hierarchical feature space using the nearest neighbor rule. Afterwards, a subset of salient points with good correspondence is selected to estimate the 3D transformation. The use of the LRF allows for invariance of the hierarchical features of points with respect to rotation and translation, thus making R-PointHop more robust at building point correspondence, even when the rotation angles are large. Experiments are conducted on the 3DMatch, ModelNet40, and Stanford Bunny datasets, which demonstrate the effectiveness of R-PointHop for 3D point cloud registration. R-PointHop's model size and training time are an order of magnitude smaller than those of deep learning methods, and its registration errors are smaller, making it a green and accurate solution. Our codes are available on GitHub ( https://github.com/pranavkdm/R-PointHop ). |
doi_str_mv | 10.1109/TIP.2022.3160609 |
format | article |
identifier | ISSN: 1057-7149; EISSN: 1941-0042; DOI: 10.1109/TIP.2022.3160609 |
ispartof | IEEE transactions on image processing, 2022, Vol.31, p.2710-2725 |
issn | 1057-7149 1941-0042 |
language | eng |
source | IEEE Electronic Library (IEL) Journals |
subjects | 3D feature descriptor; Deep learning; Feature extraction; local reference frame (LRF); Machine learning; Point cloud compression; Point cloud registration; Principal component analysis; Registration; Rotation; rotation invariance; Task analysis; Three dimensional models; Three-dimensional displays; Transforms |
title | R-PointHop: A Green, Accurate, and Unsupervised Point Cloud Registration Method |