Predicting high-fidelity human body models from impaired point clouds

Bibliographic Details
Published in: Signal Processing 2022-03, Vol. 192, p. 108375, Article 108375
Main Authors: Hu, Pengpeng, Zhao, Ran, Dai, Xinxin, Munteanu, Adrian
Format: Article
Language:English
Description
Summary:
• A method is proposed for 3D shape reconstruction from impaired point clouds.
• Misaligned point clouds, previously unusable as data, are made exploitable by the proposed method.
• A mini survey of impairments in 3D human body scans is provided.

Accurate 3D models of human subjects are widely used in domains such as fashion design, non-contact body biometrics, computer animation, gaming, and AR/VR, to name a few. For these applications, a high-fidelity human body mesh in a canonical posture (e.g. a T-pose) is necessary. This paper proposes a deep learning approach that jointly reconstructs a clean, watertight body mesh and normalizes the posture of the human body model, starting from an input set of impaired body point clouds. The proposed method, dubbed Impaired-to-High-fidelity human body network (I2H), is, to the best of our knowledge, the first deep learning approach in the literature that addresses these problems. The proposed method follows an Encoder-Decoder design. The Encoder directly takes the impaired point clouds (e.g. containing noise, occlusions, and misalignments) as input without making any structural assumptions about the input. The Decoder interprets the latent feature and produces a high-fidelity T-pose body mesh. We compare the proposed approach against existing state-of-the-art methods through various experiments and show that it achieves the best performance on both synthetic and scanned datasets for 3D human mesh reconstruction.
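The Encoder-Decoder design described above can be illustrated with a minimal sketch. The paper does not publish its architecture here, so the following is a hypothetical, simplified stand-in: a PointNet-style per-point feature mapping followed by a symmetric max-pool (which makes the latent code invariant to point ordering and tolerant of missing points, matching the "no structural assumptions about the input" property), and a linear decoder that maps the latent feature to fixed-topology mesh vertex coordinates. All weight shapes and names are illustrative assumptions, not the authors' I2H network.

```python
import numpy as np

def encode(points, W_enc, b_enc):
    """Hypothetical PointNet-style encoder.

    points: (N, 3) impaired point cloud (any N, any order).
    Per-point linear map + ReLU, then a symmetric max-pool over
    points, so the latent code does not depend on point ordering.
    """
    feats = np.maximum(points @ W_enc + b_enc, 0.0)  # (N, D) per-point features
    return feats.max(axis=0)                          # (D,) global latent code

def decode(latent, W_dec, b_dec, n_vertices):
    """Hypothetical linear decoder to a fixed-topology mesh.

    Maps the (D,) latent code to n_vertices * 3 coordinates,
    i.e. the vertex positions of a canonical T-pose template mesh.
    """
    coords = latent @ W_dec + b_dec
    return coords.reshape(n_vertices, 3)

# Illustrative usage with random weights (an untrained sketch):
rng = np.random.default_rng(0)
D, V = 64, 50                                  # latent size, mesh vertex count (assumed)
W_enc, b_enc = rng.normal(size=(3, D)), np.zeros(D)
W_dec, b_dec = rng.normal(size=(D, V * 3)), np.zeros(V * 3)

cloud = rng.normal(size=(100, 3))              # stand-in for an impaired scan
latent = encode(cloud, W_enc, b_enc)
mesh = decode(latent, W_dec, b_dec, V)         # (50, 3) vertex positions
```

Note the design choice the sketch demonstrates: because the pooling step is a symmetric function, shuffling or subsampling the input points leaves the latent code (and hence the output mesh) unchanged, which is one plausible way an encoder can ingest unordered, occluded scans directly.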
ISSN:0165-1684
1872-7557
DOI:10.1016/j.sigpro.2021.108375