ON THE ASSOCIATION OF LIDAR POINT CLOUDS AND TEXTURED MESHES FOR MULTI-MODAL SEMANTIC SEGMENTATION
The semantic segmentation of the huge amount of acquired 3D data has become an important task in recent years. We propose a novel association mechanism that enables information transfer between two 3D representations: point clouds and meshes. The association mechanism can be used in a two-fold manner: (i) feature transfer to stabilize semantic segmentation of one representation with features from the other representation and (ii) label transfer to achieve the semantic annotation of both representations. We claim that point clouds are an intermediate product whereas meshes are a final user product that jointly provides geometrical and textural information. For this reason, we opt for semantic mesh segmentation in the first place. We apply an off-the-shelf PointNet++ to a textured urban triangle mesh as generated from LiDAR and oblique imagery. For each face within a mesh, a feature vector is computed and optionally extended by inherent LiDAR features as provided by the sensor (e.g. intensity). The feature vector extension is accomplished with the proposed association mechanism. By these means, we leverage inherent features from both data representations for the semantic mesh segmentation (multi-modality). We achieve an overall accuracy of 86.40% on the face-level on a dedicated test mesh. Neglecting LiDAR-inherent features in the per-face feature vectors decreases mean intersection over union by ∼2%. Leveraging our association mechanism, we transfer predicted mesh labels to the LiDAR point cloud at a stroke. To this end, we semantically segment the point cloud by implicit usage of geometric and textural mesh features. The semantic point cloud segmentation achieves an overall accuracy close to 84% on the point-level for both feature vector compositions.
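The record does not detail the association mechanism itself, so the sketch below is only a rough illustration of the two uses the abstract names: it associates each LiDAR point with its nearest mesh-face centroid (a simplifying assumption; the paper's actual mechanism may differ), then (i) aggregates a per-point attribute such as intensity onto faces (feature transfer) and (ii) pushes predicted face labels back to the points (label transfer). All function and variable names here are hypothetical.

```python
import numpy as np

def associate(points, face_centroids):
    """Assign each point the index of its nearest face centroid.

    Deliberately simplified stand-in for the paper's association
    mechanism (brute-force nearest neighbor on centroids).
    """
    # (n_points, n_faces) pairwise squared distances
    d2 = ((points[:, None, :] - face_centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def transfer_features(point_feat, assoc, n_faces):
    """(i) Feature transfer: mean point attribute (e.g. intensity) per face."""
    sums = np.zeros(n_faces)
    counts = np.zeros(n_faces)
    np.add.at(sums, assoc, point_feat)   # unbuffered scatter-add per face
    np.add.at(counts, assoc, 1)
    return sums / np.maximum(counts, 1)  # faces with no points stay 0

def transfer_labels(face_labels, assoc):
    """(ii) Label transfer: each point inherits its associated face's label."""
    return face_labels[assoc]

# Toy example: three points, two face centroids.
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
cents = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
assoc = associate(points, cents)                               # -> [0, 1, 1]
face_intensity = transfer_features(
    np.array([10.0, 20.0, 30.0]), assoc, len(cents))           # -> [10.0, 25.0]
point_labels = transfer_labels(np.array([3, 7]), assoc)        # -> [3, 7, 7]
```

At the scale of the paper's urban datasets, the brute-force distance matrix would be replaced by a spatial index (e.g. a k-d tree), but the transfer logic would stay the same.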
Published in: | ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020-08, Vol. V-2-2020, pp. 509-516 |
Main Authors: | Laupheimer, D.; Shams Eddin, M. H.; Haala, N. |
Format: | Article |
Language: | English |
Subjects: | Cloud computing; Data acquisition; Image annotation; Image segmentation; Information transfer; Lidar; Mesh generation; Representations; Semantic segmentation; Semantics; Three-dimensional models; Triangles |
DOI: | 10.5194/isprs-annals-V-2-2020-509-2020 |
Publisher: | Copernicus GmbH, Göttingen |
ISSN: | 2194-9042, 2194-9050; eISSN: 2194-9050 |