Deformation representation based convolutional mesh autoencoder for 3D hand generation
Published in: Neurocomputing (Amsterdam), 2021-07, Vol. 444, pp. 356-365
Main Authors:
Format: Article
Language: English
Subjects:
Summary: Due to its flexible joints and self-occlusion, the representation and reconstruction of the 3D human hand is a very challenging problem. Although parametric models have been proposed to alleviate this problem, they have limited representation ability; for example, they cannot represent complex gestures. In this paper, we present a new 3D hand model with powerful representation ability and apply it to high-accuracy monocular RGB-D/RGB 3D hand reconstruction. To achieve this, we first build a large-scale, high-quality hand mesh dataset based on MANO using a novel mesh deformation method. We then train a VAE on this dataset to obtain a low-dimensional representation of hand meshes. Using our HandVAE model, a 3D human hand can be recovered from a code in this latent space. We also build a framework to recover 3D hand meshes from RGB-D/RGB data. Experimental results demonstrate the power of our hand model in terms of reconstruction accuracy and its application to RGB-D/RGB reconstruction. We believe our 3D hand representation could be further used in other hand-related applications.
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2020.01.122
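Based on the abstract above, the following is a minimal sketch (not the authors' code) of how a learned hand latent space might be decoded into mesh vertices. It substitutes a plain fully connected decoder for the paper's convolutional mesh autoencoder and deformation representation; the latent dimension, layer sizes, and the use of MANO's 778-vertex template are assumptions for illustration only.

```python
# Minimal sketch, assuming a fully connected decoder over per-vertex offsets.
# Hypothetical sizes: LATENT_DIM and hidden widths are not from the paper.
import torch
import torch.nn as nn

LATENT_DIM = 64        # assumed size of the hand latent code
NUM_VERTICES = 778     # MANO template vertex count

class HandDecoder(nn.Module):
    def __init__(self, latent_dim=LATENT_DIM, num_vertices=NUM_VERTICES):
        super().__init__()
        self.num_vertices = num_vertices
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
            nn.Linear(512, num_vertices * 3),  # per-vertex (x, y, z) offsets
        )

    def forward(self, z, template_vertices):
        # z: (batch, latent_dim); template_vertices: (num_vertices, 3)
        offsets = self.net(z).view(-1, self.num_vertices, 3)
        return template_vertices.unsqueeze(0) + offsets

# Usage: decode a random latent code into hand mesh vertex positions.
decoder = HandDecoder()
template = torch.zeros(NUM_VERTICES, 3)   # placeholder for the MANO template
z = torch.randn(1, LATENT_DIM)
vertices = decoder(z, template)           # shape: (1, 778, 3)
```

In the paper's setting, the decoder would act on a per-triangle deformation representation learned by a convolutional mesh autoencoder rather than raw vertex offsets, but the interface is the same idea: a low-dimensional latent code in, a full hand mesh out.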