
MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior

A textured urban 3D mesh is an important part of 3D real scene technology. Semantically segmenting an urban 3D mesh is a key task in the photogrammetry and remote sensing field. However, due to the irregular structure of a 3D mesh and redundant texture information, it is challenging to obtain accurate and robust semantic segmentation results for an urban 3D mesh. To address this issue, we propose a semantic urban 3D mesh segmentation network (MeshNet) with a sparse prior (SP), named MeshNet-SP. MeshNet-SP consists of a differentiable sparse coding (DSC) subnetwork and a semantic feature extraction (SFE) subnetwork. The DSC subnetwork learns low-intrinsic-dimensional features from raw texture information, which increases the effectiveness and robustness of semantic urban 3D mesh segmentation. The SFE subnetwork produces high-level semantic features by combining the geometric features of a mesh with the low-intrinsic-dimensional texture features. The proposed method is evaluated on the SUM dataset. Ablation experiments demonstrate that the low-intrinsic-dimensional feature is key to achieving accurate and robust semantic segmentation. Comparison results show that the proposed method achieves competitive accuracy, with maximum gains of 34.5%, 35.4%, and 31.8% in mR, mF1, and mIoU, respectively.
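The differentiable sparse coding (DSC) idea in the abstract follows a classical pattern: each sparse-coding iteration is a gradient step on the reconstruction error followed by elementwise soft-thresholding, and unrolling a fixed number of such steps yields a differentiable network layer. The sketch below illustrates plain ISTA (the non-learned ancestor of such layers) on an illustrative toy dictionary; it is not the paper's implementation, and all names and values are made up for demonstration.

```python
# Minimal ISTA sparse-coding sketch: approximately solve
#   min_z 0.5 * ||D z - y||^2 + lam * ||z||_1.
# Unrolling a fixed number of these iterations (with learned dictionary and
# thresholds) is the standard recipe for a differentiable sparse-coding layer.

def matvec(A, x):
    # Matrix-vector product for a list-of-rows matrix.
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def soft_threshold(x, theta):
    # Proximal operator of theta * ||.||_1, applied elementwise.
    return [(1.0 if v > 0 else -1.0) * max(abs(v) - theta, 0.0) for v in x]

def ista(D, y, lam=0.01, n_iter=2000):
    Dt = transpose(D)
    # Conservative step size: 1 / ||D||_F^2 lower-bounds 1 / ||D||_2^2.
    step = 1.0 / sum(v * v for row in D for v in row)
    z = [0.0] * len(D[0])
    for _ in range(n_iter):
        residual = [di - yi for di, yi in zip(matvec(D, z), y)]  # D z - y
        grad = matvec(Dt, residual)                              # D^T (D z - y)
        z = soft_threshold([zi - step * gi for zi, gi in zip(z, grad)],
                           step * lam)
    return z

# Illustrative dictionary (4 observations, 4 atoms) and a 1-sparse signal.
D = [[0.8, 0.1, 0.0, 0.3],
     [0.1, 0.9, 0.2, 0.0],
     [0.0, 0.1, 0.9, 0.4],
     [0.2, 0.0, 0.1, 0.8]]
y = [0.2, 1.8, 0.2, 0.0]   # = 2.0 * atom 1
z_hat = ista(D, y)         # code concentrated on atom 1
```

The soft-thresholding step is what produces the exact zeros, i.e. the "low-intrinsic-dimensional" representation: all but the relevant atoms are driven to zero while the active coefficient approaches the true value up to an L1 shrinkage bias of about `lam`.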

Bibliographic Details
Published in: Remote sensing (Basel, Switzerland), 2023-11, Vol.15 (22), p.5324
Main Authors: Zhang, Guangyun; Zhang, Rongting
Format: Article
Language:English
DOI: 10.3390/rs15225324
ISSN: 2072-4292
Source: ProQuest - Publicly Available Content Database
Subjects:
3D real scene
Ablation
convolutional neural network
Deep learning
Feature extraction
low intrinsic dimension
Methods
Optimization algorithms
Photogrammetry
Remote sensing
Robustness
Semantic segmentation
Semantics
sparse prior
Texture
urban 3D mesh
Urban areas