Trajectory-based view-invariant hand gesture recognition by fusing shape and orientation

Bibliographic Details
Published in: IET Computer Vision, 2015-12, Vol. 9 (6), p. 797-805
Main Authors: Wu, Xingyu, Mao, Xia, Chen, Lijiang, Xue, Yuli
Format: Article
Language:English
Description
Summary: Traditional studies in vision-based hand gesture recognition remain rooted in view-dependent representations, and hence users are forced to be fronto-parallel to the camera. To solve this problem, view-invariant gesture recognition aims to make the recognition result independent of viewpoint changes. However, in current works, view-invariance is achieved at the price of confusing gesture patterns that have similar trajectory curve shapes but different semantic meanings. For example, the gesture 'push' can be mistaken for 'drag' when seen from another viewpoint. To address this shortcoming, in this study the authors use a shape descriptor to extract the view-invariant features of a three-dimensional (3D) trajectory. Because the shape features are invariant to omnidirectional viewpoint changes, orientation features are then added to weight different rotation angles, so that similar trajectory shapes are better separated. The proposed method was evaluated on two different databases: a popular Australian Sign Language database and a challenging Kinect Hand Trajectory database. Experimental results show that the proposed algorithm achieves a higher average recognition rate than state-of-the-art approaches, and can better distinguish confusing gestures while meeting the view-invariant condition.
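The abstract's core idea, shape features of a 3D trajectory that stay fixed under rotations of the viewpoint, can be illustrated with a curvature histogram, a standard rotation-invariant trajectory descriptor. This is a minimal sketch for intuition only, not the authors' actual descriptor; the function name and binning scheme are assumptions.

```python
import numpy as np

def shape_features(traj, n_bins=8):
    """Rotation-invariant shape features of a 3D trajectory (T, 3):
    a normalized histogram of discrete curvature along the curve.
    Illustrative stand-in, not the descriptor from the paper."""
    v = np.diff(traj, axis=0)              # discrete velocity (T-1, 3)
    a = np.diff(v, axis=0)                 # discrete acceleration (T-2, 3)
    cross = np.cross(v[:-1], a)            # |v x a| is rotation-invariant
    speed = np.linalg.norm(v[:-1], axis=1)
    kappa = np.linalg.norm(cross, axis=1) / np.maximum(speed**3, 1e-9)
    hist, _ = np.histogram(kappa, bins=n_bins, range=(0.0, kappa.max() + 1e-9))
    return hist / max(hist.sum(), 1)

# Rotating the trajectory leaves the curvature histogram unchanged:
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
traj = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)  # a helix
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))              # random rotation
f1 = shape_features(traj)
f2 = shape_features(traj @ Q.T)
print(np.allclose(f1, f2))
```

Because curvature depends only on vector norms and cross-product magnitudes, it is unchanged by any orthogonal transform of the camera frame; this is exactly why such shape features alone cannot separate 'push' from 'drag', which motivates the orientation weighting described in the abstract.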
ISSN: 1751-9632, 1751-9640
DOI: 10.1049/iet-cvi.2014.0368