Full-body motion capture for multiple closely interacting persons
Published in: Graphical models, 2020-07, Vol.110, p.101072, Article 101072
Main Authors: Li, Kun; Mao, Yali; Liu, Yunke; Shao, Ruizhi; Liu, Yebin
Format: Article
Language: English
Subjects: Close interaction; Motion capture; Multiple persons; Occlusions; Spatio-temporal constraints
creator | Li, Kun; Mao, Yali; Liu, Yunke; Shao, Ruizhi; Liu, Yebin |
description | Human shape and pose estimation is a popular but challenging problem, especially when the body, hands, feet and face must be captured jointly for multiple persons in close interaction. Existing methods can only achieve total motion capture of a single person, or of multiple persons without close interaction. In this paper, we present a fully automatic and effective method to capture full-body human performance, including body poses, face poses, hand gestures, and feet orientations, for multiple closely interacting persons. We predict 2D keypoints corresponding to the poses of the body, face, hands and feet for each person, and associate the same person across multi-view videos by computing personalized appearance descriptors to reduce ambiguities and uncertainties. To deal with occlusions and obtain temporally coherent human shapes, we estimate shape and pose for each person with spatio-temporal tracking and constraints. Experimental results demonstrate that our method achieves better performance than state-of-the-art methods. |
doi_str_mv | 10.1016/j.gmod.2020.101072 |
publisher | Elsevier Inc |
orcidid | 0000-0003-2326-0166 |
identifier | ISSN: 1524-0703 |
issn | 1524-0703 |
eissn | 1524-0711 |
source | ScienceDirect Freedom Collection |
subjects | Close interaction Motion capture Multiple persons Occlusions Spatio-temporal constraints |