
NPMs: Neural Parametric Models for 3D Deformable Shapes

Parametric 3D models have enabled a wide variety of tasks in computer graphics and vision, such as modeling human bodies, faces, and hands. However, the construction of these parametric models is often tedious, as it requires heavy manual tweaking, and they struggle to represent additional complexity and details such as wrinkles or clothing. To this end, we propose Neural Parametric Models (NPMs), a novel, learned alternative to traditional, parametric 3D models, which does not require handcrafted, object-specific constraints. In particular, we learn to disentangle 4D dynamics into latent-space representations of shape and pose, leveraging the flexibility of recent developments in learned implicit functions. Crucially, once learned, our neural parametric models of shape and pose enable optimization over the learned spaces to fit to new observations, similar to the fitting of a traditional parametric model, e.g., SMPL. This enables NPMs to achieve a significantly more accurate and detailed representation of observed deformable sequences. We show that NPMs improve notably over both parametric and non-parametric state of the art in reconstruction and tracking of monocular depth sequences of clothed humans and hands. Latent-space interpolation as well as shape / pose transfer experiments further demonstrate the usefulness of NPMs. Code is publicly available at https://pablopalafox.github.io/npms.

Bibliographic Details
Main Authors: Palafox, Pablo; Božič, Aljaž; Thies, Justus; Nießner, Matthias; Dai, Angela
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
cited_by
cites
container_end_page 12685
container_issue
container_start_page 12675
container_title
container_volume
creator Palafox, Pablo
Božič, Aljaž
Thies, Justus
Nießner, Matthias
Dai, Angela
description Parametric 3D models have enabled a wide variety of tasks in computer graphics and vision, such as modeling human bodies, faces, and hands. However, the construction of these parametric models is often tedious, as it requires heavy manual tweaking, and they struggle to represent additional complexity and details such as wrinkles or clothing. To this end, we propose Neural Parametric Models (NPMs), a novel, learned alternative to traditional, parametric 3D models, which does not require handcrafted, object-specific constraints. In particular, we learn to disentangle 4D dynamics into latent-space representations of shape and pose, leveraging the flexibility of recent developments in learned implicit functions. Crucially, once learned, our neural parametric models of shape and pose enable optimization over the learned spaces to fit to new observations, similar to the fitting of a traditional parametric model, e.g., SMPL. This enables NPMs to achieve a significantly more accurate and detailed representation of observed deformable sequences. We show that NPMs improve notably over both parametric and non-parametric state of the art in reconstruction and tracking of monocular depth sequences of clothed humans and hands. Latent-space interpolation as well as shape / pose transfer experiments further demonstrate the usefulness of NPMs. Code is publicly available at https://pablopalafox.github.io/npms.
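The description above is the technically substantive part of this record: NPMs replace handcrafted parametric models with learned implicit shape and pose spaces, and fitting a new observation means optimizing latent codes over those learned spaces, much as one would fit SMPL parameters. As a purely illustrative aid, the PyTorch-style snippet below is a minimal sketch of what such test-time latent-code fitting can look like for a monocular depth sequence; the decoder and observation interfaces, hyperparameters, and all names are assumptions made for this sketch, not the interface of the authors' released implementation at https://pablopalafox.github.io/npms.

# Illustrative sketch only -- the interfaces below are assumptions for this example:
#   shape_decoder(x, s)    -> SDF of the canonical (unposed) shape at points x
#   pose_decoder(x, s, p)  -> per-point flow from canonical space to the posed frame
#   obs_sdf_fns[t](x)      -> observed SDF at posed-space points for frame t
#                             (e.g., a trilinear lookup in a depth-fused grid)
import torch

def fit_npm_style_codes(shape_decoder, pose_decoder, canon_points, obs_sdf_fns,
                        shape_dim=256, pose_dim=128, iters=300, lr=5e-3, reg=1e-4):
    """Optimize one shape code for the whole sequence and one pose code per frame."""
    num_frames = len(obs_sdf_fns)
    s = torch.zeros(1, shape_dim, requires_grad=True)           # shared shape code
    p = torch.zeros(num_frames, pose_dim, requires_grad=True)   # per-frame pose codes
    opt = torch.optim.Adam([s, p], lr=lr)

    n = canon_points.shape[0]
    for _ in range(iters):
        opt.zero_grad()
        sc = s.expand(n, -1)
        # Keep the sampled canonical points on the current shape estimate.
        loss = shape_decoder(canon_points, sc).abs().mean()
        for t in range(num_frames):
            pc = p[t].unsqueeze(0).expand(n, -1)
            # Warp canonical points into frame t and pull the warped surface
            # onto the observed zero level set for that frame.
            x_posed = canon_points + pose_decoder(canon_points, sc, pc)
            loss = loss + obs_sdf_fns[t](x_posed).abs().mean()
        # Simple Gaussian-style prior keeping the codes near the learned latent spaces.
        loss = loss + reg * (s.pow(2).sum() + p.pow(2).sum())
        loss.backward()
        opt.step()
    return s.detach(), p.detach()

The structural point this sketch is meant to convey, consistent with the abstract, is that a single shape code is shared across a deforming sequence while each frame receives its own pose code, and both are recovered by gradient descent over learned latent spaces rather than by tuning handcrafted, object-specific model parameters.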
doi_str_mv 10.1109/ICCV48922.2021.01246
format conference_proceeding
fulltext fulltext_linktorsrc
identifier EISSN: 2380-7504; EISBN: 9781665428125, 1665428120; CODEN: IEEPAD
ispartof 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, p.12675-12685
issn 2380-7504
language eng
recordid cdi_ieee_primary_9710325
source IEEE Xplore All Conference Series
subjects 3D from a single image and shape-from-x
3D from multiview and other sensors
Codes
Computational modeling
Fitting
Interpolation
Shape
Solid modeling
Stereo
Three-dimensional displays
title NPMs: Neural Parametric Models for 3D Deformable Shapes