Single-Shot 3D Shape Reconstruction Using Structured Light and Deep Convolutional Neural Networks
Single-shot 3D imaging and shape reconstruction has seen a surge of interest due to the ever-increasing evolution in sensing technologies. In this paper, a robust single-shot 3D shape reconstruction technique integrating the structured light technique with the deep convolutional neural networks (CNN...
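The abstract describes an end-to-end network that maps a single fringe-pattern image directly to its depth map, with no separate phase-retrieval or disparity step. A minimal sketch of that input-to-output mapping, assuming a PyTorch-style convolutional encoder-decoder (the layer widths and overall structure here are illustrative assumptions, not the three CNN models compared in the paper):

```python
# Hypothetical sketch of the end-to-end idea in the abstract: a convolutional
# encoder-decoder that maps one grayscale fringe-pattern image (1 x H x W) to a
# depth map of the same size. All layer widths and the overall structure are
# assumptions for illustration, not the authors' published networks.
import torch
import torch.nn as nn

class FringeToDepthNet(nn.Module):
    def __init__(self, base_channels: int = 32):
        super().__init__()
        # Encoder: halve the spatial resolution three times while widening channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, base_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_channels * 2, base_channels * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution and regress one depth
        # value per pixel, so no phase or disparity computation is needed.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 4, base_channels * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 1, 4, stride=2, padding=1),
        )

    def forward(self, fringe_image: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(fringe_image))

if __name__ == "__main__":
    net = FringeToDepthNet()
    fringe = torch.rand(1, 1, 480, 640)   # a single fringe-pattern image (dummy data)
    depth = net(fringe)                   # predicted depth map, same 480 x 640 size
    target = torch.zeros_like(depth)      # placeholder for a ground-truth depth map
    loss = nn.functional.l1_loss(depth, target)  # pixel-wise regression loss
    print(depth.shape, float(loss))
```

Training such a model would, per the abstract, use pairs of fringe images and ground-truth depth maps prepared with multi-frequency fringe projection profilometry and a pixel-wise regression loss; the loss and data above are placeholders only.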
Saved in:
Published in: | Sensors (Basel, Switzerland), 2020-07, Vol.20 (13), p.3718 |
---|---|
Main Authors: | Nguyen, Hieu; Wang, Yuzeng; Wang, Zhaoyang |
Format: | Article |
Language: | English |
Subjects: | Accuracy; Algorithms; Computer architecture; Datasets; Deep learning; depth measurement; fringe projection; Light; Machine learning; Measurement techniques; Neural networks; Projectors; structured light; Three dimensional imaging; three-dimensional image acquisition; three-dimensional sensing; three-dimensional shape reconstruction |
Citations: | Items that this one cites; Items that cite this one |
Online Access: | Get full text |
container_issue | 13 |
container_start_page | 3718 |
container_title | Sensors (Basel, Switzerland) |
container_volume | 20 |
creator | Nguyen, Hieu; Wang, Yuzeng; Wang, Zhaoyang |
description | Single-shot 3D imaging and shape reconstruction has seen a surge of interest due to the ever-increasing evolution in sensing technologies. In this paper, a robust single-shot 3D shape reconstruction technique integrating the structured light technique with the deep convolutional neural networks (CNNs) is proposed. The input of the technique is a single fringe-pattern image, and the output is the corresponding depth map for 3D shape reconstruction. The essential training and validation datasets with high-quality 3D ground-truth labels are prepared by using a multi-frequency fringe projection profilometry technique. Unlike the conventional 3D shape reconstruction methods which involve complex algorithms and intensive computation to determine phase distributions or pixel disparities as well as depth map, the proposed approach uses an end-to-end network architecture to directly carry out the transformation of a 2D image to its corresponding 3D depth map without extra processing. In the approach, three CNN-based models are adopted for comparison. Furthermore, an accurate structured-light-based 3D imaging dataset used in this paper is made publicly available. Experiments have been conducted to demonstrate the validity and robustness of the proposed technique. It is capable of satisfying various 3D shape reconstruction demands in scientific research and engineering applications. |
doi_str_mv | 10.3390/s20133718 |
format | article |
pmid | 32635144 |
publisher | Basel: MDPI AG |
publication_date | 2020-07-03 |
orcid | 0000-0002-5154-0125 |
fulltext | fulltext |
identifier | ISSN: 1424-8220 |
ispartof | Sensors (Basel, Switzerland), 2020-07, Vol.20 (13), p.3718 |
issn | 1424-8220 (ISSN); 1424-8220 (EISSN) |
language | eng |
source | Publicly Available Content Database (Proquest) (PQ_SDU_P3); PubMed Central |
subjects | Accuracy; Algorithms; Computer architecture; Datasets; Deep learning; depth measurement; fringe projection; Light; Machine learning; Measurement techniques; Neural networks; Projectors; structured light; Three dimensional imaging; three-dimensional image acquisition; three-dimensional sensing; three-dimensional shape reconstruction |
title | Single-Shot 3D Shape Reconstruction Using Structured Light and Deep Convolutional Neural Networks |