Position-Guided Point Cloud Panoptic Segmentation Transformer
DEtection TRansformer (DETR) started a trend that uses a group of learnable queries for unified visual perception. This work begins by applying this appealing paradigm to LiDAR-based point cloud segmentation and obtains a simple yet effective baseline. Although the naive adaptation obtains fair results, the instance segmentation performance is noticeably inferior to previous works.
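The abstract describes a DETR-style decoder in which a positional embedding is injected into the backbone point features and each query's previous mask prediction guides both the cross-attention and the next mask prediction. The PyTorch sketch below only illustrates that general mechanism under assumptions of my own: the class name `PositionGuidedDecoderLayer`, the simple MLP positional encoder, and the 0.5 masking threshold are hypothetical and are not taken from P3Former's actual MPE/PA-Seg/MFA implementation, which is available in the linked repository.

```python
import torch
import torch.nn as nn

class PositionGuidedDecoderLayer(nn.Module):
    """Toy decoder layer: learnable queries attend to point features that are
    augmented with a positional embedding, and cross-attention is restricted
    to each query's previously predicted mask region."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        # Hypothetical positional encoder; the paper's MPE mixes parameterizations.
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, queries, point_feats, point_xyz, prev_mask_logits):
        # queries:          (B, Q, C) learnable thing/stuff queries
        # point_feats:      (B, N, C) backbone features of N LiDAR points
        # point_xyz:        (B, N, 3) point coordinates
        # prev_mask_logits: (B, Q, N) mask prediction from the previous layer
        pos_embed = self.pos_mlp(point_xyz)
        keys = point_feats + pos_embed                  # embed position into features

        # Masked attention: a query only attends to points inside its current mask.
        attn_mask = prev_mask_logits.sigmoid() < 0.5    # True = position is NOT attended
        attn_mask = attn_mask & ~attn_mask.all(dim=-1, keepdim=True)  # avoid empty rows
        attn_mask = attn_mask.repeat_interleave(self.cross_attn.num_heads, dim=0)

        out, _ = self.cross_attn(queries, keys, point_feats, attn_mask=attn_mask)
        queries = self.norm(queries + out)

        # Position-aware mask prediction: correlate queries with position-augmented features.
        mask_logits = torch.einsum('bqc,bnc->bqn', queries, keys)
        return queries, mask_logits

# Minimal usage with made-up sizes: an all-zero initial mask attends everywhere.
layer = PositionGuidedDecoderLayer()
q = torch.randn(2, 128, 256)
feats, xyz = torch.randn(2, 4096, 256), torch.randn(2, 4096, 3)
q, masks = layer(q, feats, xyz, torch.zeros(2, 128, 4096))
```

Stacking such layers and feeding each layer's mask logits into the next is one way to realize the iterative mask-prediction and query-update loop the abstract refers to.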
Field | Value |
---|---|
Published in: | arXiv.org 2023-03 |
Main Authors: | Xiao, Zeqi; Zhang, Wenwei; Wang, Tai; Chen Change Loy; Lin, Dahua; Pang, Jiangmiao |
Format: | Article |
Language: | English |
Subjects: | Design parameters; Embedding; Image segmentation; Queries; Source code; Transformers; Visual perception |
Online Access: | Get full text |
Field | Value |
---|---|
cited_by | |
cites | |
container_end_page | |
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Xiao, Zeqi; Zhang, Wenwei; Wang, Tai; Chen Change Loy; Lin, Dahua; Pang, Jiangmiao |
description | DEtection TRansformer (DETR) started a trend that uses a group of learnable queries for unified visual perception. This work begins by applying this appealing paradigm to LiDAR-based point cloud segmentation and obtains a simple yet effective baseline. Although the naive adaptation obtains fair results, the instance segmentation performance is noticeably inferior to previous works. By diving into the details, we observe that instances in the sparse point clouds are relatively small to the whole scene and often have similar geometry but lack distinctive appearance for segmentation, which are rare in the image domain. Considering instances in 3D are more featured by their positional information, we emphasize their roles during the modeling and design a robust Mixed-parameterized Positional Embedding (MPE) to guide the segmentation process. It is embedded into backbone features and later guides the mask prediction and query update processes iteratively, leading to Position-Aware Segmentation (PA-Seg) and Masked Focal Attention (MFA). All these designs impel the queries to attend to specific regions and identify various instances. The method, named Position-guided Point cloud Panoptic segmentation transFormer (P3Former), outperforms previous state-of-the-art methods by 3.4% and 1.2% PQ on SemanticKITTI and nuScenes benchmark, respectively. The source code and models are available at https://github.com/SmartBot-PJLab/P3Former . |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2790190899 |
source | Publicly Available Content Database |
subjects | Design parameters; Embedding; Image segmentation; Queries; Source code; Transformers; Visual perception |
title | Position-Guided Point Cloud Panoptic Segmentation Transformer |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T08%3A30%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Position-Guided%20Point%20Cloud%20Panoptic%20Segmentation%20Transformer&rft.jtitle=arXiv.org&rft.au=Xiao,%20Zeqi&rft.date=2023-03-23&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2790190899%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_27901908993%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2790190899&rft_id=info:pmid/&rfr_iscdi=true |