
HUMBI: A Large Multiview Dataset of Human Body Expressions and Benchmark Challenge

This paper presents a new large multiview dataset called HUMBI for human body expressions with natural clothing. The goal of HUMBI is to facilitate modeling view-specific appearance and geometry of five primary body signals including gaze, face, hand, body, and garment from assorted people. 107 synchronized HD cameras are used to capture 772 distinctive subjects across gender, ethnicity, age, and style. With the multiview image streams, we reconstruct the geometry of body expressions using 3D mesh models, which allows representing view-specific appearance. We demonstrate that HUMBI is highly effective in learning and reconstructing a complete human model and is complementary to the existing datasets of human body expressions with limited views and subjects such as MPII-Gaze, Multi-PIE, Human3.6M, and Panoptic Studio datasets. Based on HUMBI, we formulate a new benchmark challenge of a pose-guided appearance rendering task that aims to substantially extend photorealism in modeling diverse human expressions in 3D, which is the key enabling factor of authentic social tele-presence. HUMBI is publicly available at http://humbi-data.net.
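
The abstract describes representing view-specific appearance by projecting reconstructed 3D mesh geometry into calibrated camera views. As a minimal illustrative sketch (not the authors' code and not HUMBI's actual data format), the Python snippet below shows the standard pinhole projection step such a multiview pipeline relies on; all camera parameters, array shapes, and function names are hypothetical placeholders.

```python
# Minimal sketch: project 3D mesh vertices into one calibrated camera view
# and sample per-vertex appearance from that view's image.
# All values below are synthetic placeholders, not HUMBI calibration data.
import numpy as np

def project_points(X, K, R, t):
    """Project Nx3 world points to Nx2 pixels with intrinsics K and
    world-to-camera pose (R, t), using a pinhole camera model."""
    Xc = X @ R.T + t              # world frame -> camera frame
    x = Xc @ K.T                  # apply intrinsics (homogeneous coords)
    return x[:, :2] / x[:, 2:3]   # perspective divide

def sample_appearance(image, pixels):
    """Nearest-neighbor color lookup for projected vertices."""
    cols = np.clip(np.round(pixels[:, 0]).astype(int), 0, image.shape[1] - 1)
    rows = np.clip(np.round(pixels[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[rows, cols]

# Illustrative usage with synthetic data (a real capture would use per-camera
# calibration and HD images from the synchronized rig).
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 3.0])
vertices = np.random.rand(100, 3) - 0.5               # stand-in for a body mesh
image = np.zeros((1080, 1920, 3), dtype=np.uint8)     # stand-in for one camera view
colors = sample_appearance(image, project_points(vertices, K, R, t))
```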

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023-01, Vol. 45 (1), p. 623-640
Main Authors: Yoon, Jae Shin, Yu, Zhixuan, Park, Jaesik, Park, Hyun Soo
Format: Article
Language: English
DOI: 10.1109/TPAMI.2021.3138762
ISSN: 0162-8828
EISSN: 1939-3539, 2160-9292
Source: IEEE Electronic Library (IEL) Journals
Subjects: 3D geometry and appearance
Algorithms
Benchmarking
Benchmarks
Biological system modeling
Cameras
Datasets
Faces
Finite element method
Geometry
Human behavioral imaging
Human Body
Humans
Image reconstruction
Learning
Modelling
multiview dataset
Solid modeling
Three dimensional models
Three-dimensional displays