
Differentiable Neural Surface Refinement for Modeling Transparent Objects

Neural implicit surface reconstruction leveraging volume rendering has led to significant advances in multi-view reconstruction. However, results for transparent objects can be very poor, primarily because the rendering function fails to account for the intricate light transport induced by refraction and reflection. In this study, we introduce transparent neural surface refinement (TNSR), a novel surface reconstruction framework that explicitly incorporates physical refraction and reflection tracing. Beginning with an initial, approximate surface, our method employs sphere tracing combined with Snell's law to cast both reflected and refracted rays. Central to our proposal is an innovative differentiable technique devised to allow signals from the photometric evidence to propagate back to the surface model by considering how the surface bends and reflects light rays. This allows us to connect surface refinement with volume rendering, enabling end-to-end optimization solely on multi-view RGB images. In our experiments, TNSR demonstrates significant improvements in novel view synthesis and geometry estimation of transparent objects, without prior knowledge of the refractive index.


Bibliographic Details
Main Authors: Deng, Weijian, Campbell, Dylan, Sun, Chunyi, Kanitkar, Shubham, Shaffer, Matthew E., Gould, Stephen
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
container_end_page 20277
container_issue
container_start_page 20268
container_title
container_volume
creator Deng, Weijian
Campbell, Dylan
Sun, Chunyi
Kanitkar, Shubham
Shaffer, Matthew E.
Gould, Stephen
description Neural implicit surface reconstruction leveraging volume rendering has led to significant advances in multi-view reconstruction. However, results for transparent objects can be very poor, primarily because the rendering function fails to account for the intricate light transport induced by refraction and reflection. In this study, we introduce transparent neural surface refinement (TNSR), a novel surface reconstruction framework that explicitly incorporates physical refraction and reflection tracing. Beginning with an initial, approximate surface, our method employs sphere tracing combined with Snell's law to cast both reflected and refracted rays. Central to our proposal is an innovative differentiable technique devised to allow signals from the photometric evidence to propagate back to the surface model by considering how the surface bends and reflects light rays. This allows us to connect surface refinement with volume rendering, enabling end-to-end optimization solely on multi-view RGB images. In our experiments, TNSR demonstrates significant improvements in novel view synthesis and geometry estimation of transparent objects, without prior knowledge of the refractive index.
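The abstract's ray-casting step — spawning reflected and refracted directions at a surface hit point via Snell's law — can be sketched in vector form. This is an illustrative snippet only, not code from the paper; the function names `reflect`/`refract` and the NumPy formulation are assumptions.

```python
import numpy as np

def reflect(d, n):
    """Reflect unit incident direction d about unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, eta):
    """Refract unit incident direction d through a surface with unit normal n.

    eta is the ratio n1/n2 of refractive indices (incident / transmitted
    medium). Returns the refracted unit direction, or None when Snell's law
    has no solution (total internal reflection).
    """
    cos_i = -np.dot(d, n)                 # cosine of incidence angle
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:                      # total internal reflection
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```

At normal incidence the refracted ray continues straight through, and for a dense-to-rare crossing beyond the critical angle `refract` returns None, at which point only the reflected ray is cast.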
doi_str_mv 10.1109/CVPR52733.2024.01916
format conference_proceeding
fullrecord Differentiable Neural Surface Refinement for Modeling Transparent Objects / Deng, Weijian; Campbell, Dylan; Sun, Chunyi; Kanitkar, Shubham; Shaffer, Matthew E.; Gould, Stephen. In: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024-06-16, p. 20268-20277. Publisher: IEEE. EISSN: 2575-7075. EISBN: 9798350353006. CODEN: IEEPAD. DOI: 10.1109/CVPR52733.2024.01916.
fulltext fulltext_linktorsrc
identifier EISSN: 2575-7075
ispartof 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, p.20268-20277
issn 2575-7075
language eng
recordid cdi_ieee_primary_10657013
source IEEE Xplore All Conference Series
subjects 3D Reconstruction
Neural Surface Refinement
Optical imaging
Optical refraction
Optical variables control
Reflection
Refractive index
Rendering (computer graphics)
Surface reconstruction
Transparent Objects
title Differentiable Neural Surface Refinement for Modeling Transparent Objects
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T06%3A54%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Differentiable%20Neural%20Surface%20Refinement%20for%20Modeling%20Transparent%20Objects&rft.btitle=2024%20IEEE/CVF%20Conference%20on%20Computer%20Vision%20and%20Pattern%20Recognition%20(CVPR)&rft.au=Deng,%20Weijian&rft.date=2024-06-16&rft.spage=20268&rft.epage=20277&rft.pages=20268-20277&rft.eissn=2575-7075&rft.coden=IEEPAD&rft_id=info:doi/10.1109/CVPR52733.2024.01916&rft.eisbn=9798350353006&rft_dat=%3Cieee_CHZPO%3E10657013%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-ieee_primary_106570133%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10657013&rfr_iscdi=true