
XNect: real-time multi-person 3D motion capture with a single RGB camera

Bibliographic Details
Published in: ACM Transactions on Graphics, 2020-07, Vol. 39 (4), p. 82:1-82:17, Article 82
Main Authors: Mehta, Dushyant; Sotnychenko, Oleksandr; Mueller, Franziska; Xu, Weipeng; Elgharib, Mohamed; Fua, Pascal; Seidel, Hans-Peter; Rhodin, Helge; Pons-Moll, Gerard; Theobalt, Christian
Format: Article
Language: English
Description: We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. It operates successfully in generic scenes which may contain occlusions by objects and by other people. Our method operates in successive stages. The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long and short range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy. In the second stage, a fully-connected neural network turns the possibly partial (on account of occlusion) 2D and 3D pose features for each subject into a complete 3D pose estimate per individual. The third stage applies space-time skeletal model fitting to the predicted 2D and 3D pose per subject to further reconcile them and enforce temporal coherence. Our method returns the full skeletal pose in joint angles for each subject. This is a further key distinction from previous work, which does not produce joint-angle results for a coherent skeleton in real time in multi-person scenes. The proposed system runs on consumer hardware at a previously unseen speed of more than 30 fps given 512×320 images as input, while achieving state-of-the-art accuracy, which we demonstrate on a range of challenging real-world scenes.
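The abstract's three-stage pipeline can be sketched as a simple data flow. The sketch below is a toy illustration only: `stage1_detect`, `stage2_lift`, `stage3_smooth`, the joint count, the mean-fill completion, and the moving-average smoothing are all stand-ins chosen here for brevity, not the paper's actual components (SelecSLS Net, the per-person lifting network, and the space-time skeletal model fitter).

```python
# Toy sketch of the three-stage pipeline described in the abstract.
# All numerics are illustrative placeholders, not the paper's models.

NUM_JOINTS = 4  # toy skeleton, not the paper's joint set


def stage1_detect(frame):
    """Stage 1 stand-in: a per-frame CNN would yield, for each person,
    2D joints plus 3D pose features, partial under occlusion (None here)."""
    # Hard-coded output: one person, with joint 2 occluded.
    return [{
        "j2d": [(10.0 * j, 20.0 * j) for j in range(NUM_JOINTS)],
        "j3d": [None if j == 2 else (float(j),) * 3 for j in range(NUM_JOINTS)],
    }]


def stage2_lift(person):
    """Stage 2 stand-in: complete the partial 3D pose. The paper uses a
    fully-connected network; filling occluded joints with the mean of
    the visible ones is purely illustrative."""
    visible = [p for p in person["j3d"] if p is not None]
    mean = tuple(sum(c) / len(visible) for c in zip(*visible))
    return [mean if p is None else p for p in person["j3d"]]


def stage3_smooth(prev_pose, pose, alpha=0.8):
    """Stage 3 stand-in: temporal coherence. The paper fits a skeletal
    model over space-time; an exponential moving average is the simplest
    analogue of that smoothing."""
    if prev_pose is None:
        return pose
    return [tuple(alpha * c + (1.0 - alpha) * pc for c, pc in zip(p, pp))
            for p, pp in zip(pose, prev_pose)]


def process_frame(frame, prev_poses):
    """Run the three stages for one frame; one list entry per person."""
    out = []
    for i, person in enumerate(stage1_detect(frame)):
        full_3d = stage2_lift(person)          # complete 3D pose
        prev = prev_poses[i] if i < len(prev_poses) else None
        out.append(stage3_smooth(prev, full_3d))
    return out


poses = process_frame(frame=None, prev_poses=[])  # first frame, no history
```

The staged structure mirrors why the system can run in real time: the expensive CNN pass happens once per frame for all people jointly, while the per-person completion and temporal fitting stages are comparatively cheap.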
DOI: 10.1145/3386569.3392410
ISSN: 0730-0301
EISSN: 1557-7368
Source: Association for Computing Machinery: Jisc Collections: ACM OPEN Journals 2023-2025 (reading list)
Subjects: Animation; Artificial intelligence; Computer graphics; Computer vision; Computing methodologies; Machine learning; Machine learning approaches; Motion capture; Neural networks