
Learning to Dress 3D People in Generative Clothing

Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shapes. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term in SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses. The model, code and data are available for research purposes at https://cape.is.tue.mpg.de.

Bibliographic Details
Main Authors: Ma, Qianli, Yang, Jinlong, Ranjan, Anurag, Pujades, Sergi, Pons-Moll, Gerard, Tang, Siyu, Black, Michael J.
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
container_end_page 6477
container_start_page 6468
creator Ma, Qianli
Yang, Jinlong
Ranjan, Anurag
Pujades, Sergi
Pons-Moll, Gerard
Tang, Siyu
Black, Michael J.
description Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shapes. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term in SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses. The model, code and data are available for research purposes at https://cape.is.tue.mpg.de.
doi_str_mv 10.1109/CVPR42600.2020.00650
format conference_proceeding
identifier EISSN: 2575-7075
ispartof 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, p.6468-6477
issn 2575-7075
language eng
recordid cdi_ieee_primary_9157608
source IEEE Xplore All Conference Series
subjects Deformable models
Image reconstruction
Shape
Solid modeling
Strain
Three-dimensional displays
title Learning to Dress 3D People in Generative Clothing