SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera
We present a solution to egocentric 3D body pose estimation from monocular images captured from downward looking fish-eye cameras installed on the rim of a head mounted VR device. This unusual viewpoint leads to images with unique visual appearance, with severe self-occlusions and perspective distortions that result in drastic differences in resolution between lower and upper body.
Saved in:
Published in: | arXiv.org 2020-11 |
---|---|
Main Authors: | Tome, Denis; Alldieck, Thiemo; Peluse, Patrick; Pons-Moll, Gerard; Agapito, Lourdes; Badino, Hernan; De la Torre, Fernando |
Format: | Article |
Language: | English |
Subjects: | Cameras; Coders; Datasets; Encoders-Decoders; Ground truth; Three dimensional bodies |
Online Access: | Get full text |
cited_by | |
---|---|
cites | |
container_end_page | |
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Tome, Denis; Alldieck, Thiemo; Peluse, Patrick; Pons-Moll, Gerard; Agapito, Lourdes; Badino, Hernan; De la Torre, Fernando |
description | We present a solution to egocentric 3D body pose estimation from monocular images captured from downward looking fish-eye cameras installed on the rim of a head mounted VR device. This unusual viewpoint leads to images with unique visual appearance, with severe self-occlusions and perspective distortions that result in drastic differences in resolution between lower and upper body. We propose an encoder-decoder architecture with a novel multi-branch decoder designed to account for the varying uncertainty in 2D predictions. The quantitative evaluation, on synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric approaches. To tackle the lack of labelled data we also introduce a large photo-realistic synthetic dataset. xR-EgoPose offers high-quality renderings of people with diverse skin tones, body shapes and clothing, performing a range of actions. Our experiments show that the high variability in our new synthetic training corpus leads to good generalization to real-world footage and to state-of-the-art results on real-world datasets with ground truth. Moreover, an evaluation on the Human3.6M benchmark shows that the performance of our method is on par with top performing approaches on the more classic problem of 3D human pose from a third-person viewpoint. |
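The abstract describes a shared encoder feeding a multi-branch decoder, with one branch accounting for per-joint uncertainty in the 2D predictions. The following is a minimal NumPy sketch of that general layout only, not the authors' implementation: all dimensions, layer sizes, and branch heads (2D keypoints, 3D pose, per-joint uncertainty) are illustrative assumptions, and the random weights stand in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    return x @ w + b

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical sizes: flattened image features -> shared latent -> three heads.
D_IN, D_LAT, N_JOINTS = 64, 16, 15

# Randomly initialised weights stand in for trained parameters.
W_enc, b_enc = rng.normal(size=(D_IN, D_LAT)), np.zeros(D_LAT)
W_2d, b_2d = rng.normal(size=(D_LAT, N_JOINTS * 2)), np.zeros(N_JOINTS * 2)
W_3d, b_3d = rng.normal(size=(D_LAT, N_JOINTS * 3)), np.zeros(N_JOINTS * 3)
W_unc, b_unc = rng.normal(size=(D_LAT, N_JOINTS)), np.zeros(N_JOINTS)

def forward(x):
    z = relu(linear(x, W_enc, b_enc))                        # shared encoder
    kp2d = linear(z, W_2d, b_2d).reshape(-1, N_JOINTS, 2)    # 2D keypoint branch
    pose3d = linear(z, W_3d, b_3d).reshape(-1, N_JOINTS, 3)  # 3D pose branch
    # Per-joint uncertainty, kept strictly positive via softplus.
    sigma = np.log1p(np.exp(linear(z, W_unc, b_unc)))
    return kp2d, pose3d, sigma

x = rng.normal(size=(4, D_IN))          # a batch of 4 feature vectors
kp2d, pose3d, sigma = forward(x)
print(kp2d.shape, pose3d.shape, sigma.shape)
```

The point of the multi-head structure is that a single latent code can supervise 2D and 3D targets jointly while the uncertainty head lets downstream losses down-weight joints that are occluded or heavily distorted in the fish-eye view.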
doi_str_mv | 10.48550/arxiv.2011.01519 |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2457443369 |
source | Publicly Available Content Database (Proquest) (PQ_SDU_P3) |
subjects | Cameras; Coders; Datasets; Encoders-Decoders; Ground truth; Three dimensional bodies |
title | SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-21T12%3A47%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=SelfPose:%203D%20Egocentric%20Pose%20Estimation%20from%20a%20Headset%20Mounted%20Camera&rft.jtitle=arXiv.org&rft.au=Tome,%20Denis&rft.date=2020-11-02&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2011.01519&rft_dat=%3Cproquest%3E2457443369%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-a529-260518d5c4f6de0c59ee97e90c1a38a4ba4759802bb9639c3996592ae606c3453%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2457443369&rft_id=info:pmid/&rfr_iscdi=true |