
Flexible SVBRDF Capture with a Multi‐Image Deep Network


Bibliographic Details
Published in: Computer graphics forum, 2019-07, Vol. 38 (4), p. 1-13
Main Authors: Deschaintre, Valentin, Aittala, Miika, Durand, Fredo, Drettakis, George, Bousseau, Adrien
Format: Article
Language:English
Abstract: Empowered by deep learning, recent methods for material capture can estimate a spatially‐varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization‐based approaches. However, a single image is often simply not enough to observe the rich appearance of real‐world materials. We present a deep‐learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order‐independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high‐quality reconstructions with as few as 1 to 10 images ‐ a sweet spot between existing single‐image and complex multi‐image approaches.
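The record contains no code; as an illustrative sketch only (not the authors' implementation, and `fuse_features` is a hypothetical name), the key property of an order‐independent fusing layer, namely invariance to the number and ordering of input pictures, can be demonstrated with a symmetric pooling operation such as max pooling over per‐image feature maps:

```python
import numpy as np

def fuse_features(per_image_features):
    """Order-independent fusion: max-pool feature maps across input images.

    per_image_features: list of arrays, one per input photo, each of shape
    (H, W, C). Because max is commutative and associative, the fused result
    does not depend on the order, or the count, of the input pictures.
    """
    stacked = np.stack(per_image_features, axis=0)  # shape (N, H, W, C)
    return stacked.max(axis=0)                      # shape (H, W, C)

# The same downstream network can therefore consume 1, 5, or 10 photos:
feats = [np.random.rand(8, 8, 16) for _ in range(5)]
fused = fuse_features(feats)
assert fused.shape == (8, 8, 16)
# Permuting the inputs leaves the fused features unchanged:
assert np.allclose(fused, fuse_features(feats[::-1]))
```

Any symmetric reduction (max, mean, sum) would give the same invariance; which one the published architecture uses is not stated in this record.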
DOI: 10.1111/cgf.13765
ISSN: 0167-7055
EISSN: 1467-8659
Source: Business Source Ultimate; EBSCOhost Art & Architecture Source; Wiley-Blackwell Read & Publish Collection
Subjects: Appearance capture; CCS Concepts; Computer graphics; Computer Science; Computing methodologies → Reflectance modeling; Deep learning; Image Processing; Machine learning; Material capture; Optimization; Pictures; Reflectance; SVBRDF