
Creating and Reenacting Controllable 3D Humans with Differentiable Rendering

This paper proposes a new end-to-end neural rendering architecture to transfer appearance and reenact human actors. Our method leverages a carefully designed graph convolutional network (GCN) to model the human body manifold structure, jointly with differentiable rendering, to synthesize new videos of people in contexts different from those where they were initially recorded. Unlike recent appearance-transfer methods, our approach can reconstruct a fully controllable 3D texture-mapped model of a person, while taking into account the manifold structure of body shape and texture appearance in the view synthesis. Specifically, our approach models mesh deformations with a three-stage GCN trained in a self-supervised manner on rendered silhouettes of the human body. It also infers texture appearance with a convolutional network in the texture domain, which is trained in an adversarial regime to reconstruct human texture from rendered images of actors in different poses. Experiments on different videos show that our method successfully infers specific body deformations and avoids creating texture artifacts, while achieving the best appearance scores in terms of Structural Similarity (SSIM), Learned Perceptual Image Patch Similarity (LPIPS), Mean Squared Error (MSE), and Fréchet Video Distance (FVD). By taking advantage of both differentiable rendering and the 3D parametric model, our method is fully controllable, allowing the human synthesis to be controlled from both pose and rendering parameters. The source code is available at https://www.verlab.dcc.ufmg.br/retargeting-motion/wacv2022.
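The abstract describes a graph convolutional network operating over the body-mesh structure. The paper's exact three-stage architecture is not given in this record; purely as a hypothetical illustration of what a single graph-convolution step over mesh vertices looks like, here is a minimal Kipf-Welling-style layer sketched in NumPy (all names and shapes are assumptions, not the authors' code):

```python
import numpy as np

def graph_conv(X, A, W):
    """One graph-convolution layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W).

    X: (n, f) vertex features; A: (n, n) mesh adjacency; W: (f, f_out) weights.
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # vertex degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return np.maximum(H, 0.0)                # ReLU

# Toy "mesh": a single triangle (3 vertices, each connected to the other two).
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(3, 4))  # 4-dim vertex features
W = np.full((4, 2), 0.1)                          # toy weight matrix
H = graph_conv(X, A, W)
print(H.shape)  # (3, 2)
```

In the paper itself, such layers are stacked in three stages and trained self-supervised on rendered body silhouettes; the sketch above only conveys the basic propagation rule a GCN applies to vertex features.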

Bibliographic Details
Published in: arXiv.org, 2021-10
Main Authors: Gomes, Thiago L; Coutinho, Thiago M; Azevedo, Rafael; Martins, Renato; Nascimento, Erickson R
Format: Article
Language: English
EISSN: 2331-8422
Source: Publicly Available Content Database
Subjects: Controllability; Deformation; Finite element method; Human body; Image reconstruction; Manifolds; Rendering; Similarity; Source code; Synthesis; Texture; Three dimensional models; Video