Simple very deep convolutional network for robust hand pose regression from a single depth image
Published in: Pattern Recognition Letters, 2019-03, Vol. 119, pp. 205-213
Main Authors:
Format: Article
Language: English
Summary:
•A dedicated network structure is presented to regress 3D hand pose from a single depth image.
•We discuss the effect of the ConvNet depth on its accuracy under the hand pose regression setting.
•We introduce batch normalization and a low-dimensional embedding to help hand pose estimation.
•The proposed system is efficient, running at more than 500 fps on a single GPU.
•Experimental results show that our method achieves results competitive with state-of-the-art methods.

We propose a novel approach for articulated hand pose estimation from a single depth image using a very deep convolutional network. First, a very deep network structure is designed that directly maps a single depth image to its corresponding 3D hand joint locations. This approach eliminates the need for hand-crafted intermediate features and sophisticated post-processing stages for robust and accurate hand pose estimation. We use Batch Normalization to accelerate training and prevent the objective function from getting stuck in poor local minima. We introduce a low-dimensional embedding that forces the network to learn the inherent constraints of hand joints, which helps to reduce the cost of reconstructing 3D hand poses from a high-dimensional feature space. We discuss the effect of the convolutional network depth on its accuracy under the hand pose regression setting. Quantitative assessments on two challenging datasets show that our proposed method achieves accuracy competitive with state-of-the-art approaches. Qualitative results further show that the proposed method is robust to several difficult hand poses.
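The abstract describes a direct regression pipeline: a deep stack of convolutional layers with Batch Normalization maps a depth crop to a low-dimensional pose embedding, which is then linearly lifted to 3D joint coordinates. The sketch below illustrates that pattern only; the framework choice (PyTorch), layer counts, channel widths, input resolution (96x96), joint count (14), and embedding size (30) are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of the regression pattern described in the abstract.
# All architecture numbers below are illustrative assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 14   # assumed joint count
EMBED_DIM = 30    # assumed size of the low-dimensional pose embedding


class DeepHandPoseNet(nn.Module):
    """Deep ConvNet that directly regresses 3D joint locations from a depth crop."""

    def __init__(self, num_joints=NUM_JOINTS, embed_dim=EMBED_DIM):
        super().__init__()
        self.num_joints = num_joints

        layers = []
        in_ch = 1  # single-channel depth image
        # Stack of small 3x3 conv blocks: Conv -> BatchNorm -> ReLU,
        # with 2x2 max pooling whenever the channel width grows.
        for out_ch in (32, 32, 64, 64, 128, 128, 256, 256):
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),  # BN to speed up training, as in the paper
                nn.ReLU(inplace=True),
            ]
            if out_ch != in_ch:
                layers.append(nn.MaxPool2d(2))
            in_ch = out_ch
        self.features = nn.Sequential(*layers)

        # Fully connected head: high-dimensional features are squeezed into a
        # low-dimensional embedding, then linearly lifted to 3D joint coordinates.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 1024),  # assumes a 96x96 crop (4 poolings -> 6x6)
            nn.BatchNorm1d(1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, embed_dim),            # low-dimensional pose embedding
            nn.Linear(embed_dim, 3 * num_joints),  # reconstruct the full 3D pose
        )

    def forward(self, depth):
        # depth: (batch, 1, 96, 96) normalized depth crop of the hand region
        out = self.regressor(self.features(depth))
        return out.view(-1, self.num_joints, 3)


if __name__ == "__main__":
    net = DeepHandPoseNet()
    joints = net(torch.randn(2, 1, 96, 96))
    print(joints.shape)  # torch.Size([2, 14, 3])
```

In this reading, the final linear pair acts as the bottleneck the abstract mentions: the network must express the pose in a few latent dimensions before it is expanded to joint coordinates, which encodes the joint constraints implicitly.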
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2017.10.019