Crowd3D++: Robust Monocular Crowd Reconstruction with Upright Space
This paper aims to reconstruct hundreds of people's 3D poses, shapes, and locations from a single image with unknown camera parameters. Due to the small and highly varying 2D human scales, depth ambiguity, and perspective distortion, no existing methods can achieve globally consistent reconstruction and accurate reprojection. To address these challenges, we first propose Crowd3D, which leverages a new concept, Human-scene Virtual Interaction Point (HVIP), to convert the complex 3D human localization into 2D-pixel localization with robust camera and ground estimation to achieve globally consistent reconstruction. To achieve stable generalization on different camera FoVs without test-time optimization, we propose an extended version, Crowd3D++, which eliminates the influence of camera parameters and the cropping operation by the proposed canonical upright space and ground-aware normalization transform. In the defined upright space, Crowd3D++ also designs an HVIPNet to regress 2D HVIP and infer the depths. Besides, we contribute two benchmark datasets, LargeCrowd and SyntheticCrowd, for evaluating crowd reconstruction in large scenes. The source code and data will be made publicly available after acceptance.
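The core HVIP idea, as the abstract describes it, reduces 3D human localization to finding a 2D pixel once the camera and ground plane are estimated: the 3D point follows by intersecting the camera ray through that pixel with the ground. Below is a minimal geometric sketch of that step; the function name, interface, and the sample intrinsics and ground plane are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def backproject_hvip(hvip_px, K, ground_normal, ground_d):
    """Recover a 3D point on the ground plane from a 2D HVIP pixel.

    Intersects the camera ray through the pixel with the ground plane
    n . X + d = 0 (camera coordinates). Hypothetical interface for
    illustration; Crowd3D/Crowd3D++ may implement this differently.
    """
    # Ray direction through the pixel in camera coordinates.
    uv1 = np.array([hvip_px[0], hvip_px[1], 1.0])
    ray = np.linalg.inv(K) @ uv1
    # Scale t so the ray meets the plane: n . (t * ray) + d = 0.
    t = -ground_d / (ground_normal @ ray)
    return t * ray  # 3D HVIP in camera coordinates

# Example (assumed values): f = 1000 px, principal point at the image
# center of a 1920x1080 frame, level camera 1.6 m above the ground
# (y-down convention, so the plane is y = 1.6, i.e. n = (0,1,0), d = -1.6).
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
p = backproject_hvip((1100.0, 700.0), K, np.array([0.0, 1.0, 0.0]), -1.6)
print(p)  # ground-contact point; p[2] is the recovered depth (10 m here)
```

Once this ground anchor is known, the person's global translation follows directly, which is why robust camera and ground estimation makes the reconstruction globally consistent across the whole crowd.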
Published in: arXiv.org, 2024-11
Main Authors: Huang, Jing; Wen, Hao; Zhou, Tianyi; Lin, Haozhe; Lai, Yu-Kun; Li, Kun
Format: Article
Language: English
Subjects: Cameras; Image reconstruction; Localization; Parameters; Robustness; Source code; Virtual reality
container_title | arXiv.org |
creator | Huang, Jing; Wen, Hao; Zhou, Tianyi; Lin, Haozhe; Lai, Yu-Kun; Li, Kun |
description | This paper aims to reconstruct hundreds of people's 3D poses, shapes, and locations from a single image with unknown camera parameters. Due to the small and highly varying 2D human scales, depth ambiguity, and perspective distortion, no existing methods can achieve globally consistent reconstruction and accurate reprojection. To address these challenges, we first propose Crowd3D, which leverages a new concept, Human-scene Virtual Interaction Point (HVIP), to convert the complex 3D human localization into 2D-pixel localization with robust camera and ground estimation to achieve globally consistent reconstruction. To achieve stable generalization on different camera FoVs without test-time optimization, we propose an extended version, Crowd3D++, which eliminates the influence of camera parameters and the cropping operation by the proposed canonical upright space and ground-aware normalization transform. In the defined upright space, Crowd3D++ also designs an HVIPNet to regress 2D HVIP and infer the depths. Besides, we contribute two benchmark datasets, LargeCrowd and SyntheticCrowd, for evaluating crowd reconstruction in large scenes. The source code and data will be made publicly available after acceptance. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3127419322 |
source | Publicly Available Content (ProQuest) |
subjects | Cameras; Image reconstruction; Localization; Parameters; Robustness; Source code; Virtual reality |
title | Crowd3D++: Robust Monocular Crowd Reconstruction with Upright Space |