
Gesture recognition of graph convolutional neural network based on spatial domain

Bibliographic Details
Published in: Neural Computing & Applications, 2023, Vol. 35 (3), p. 2157-2167
Main Authors: Chen, Hong, Zhao, Hongdong, Qi, Baoqiang, Zhang, Shuai, Yu, Zhanghong
Format: Article
Language:English
Description
Summary: As computer and other Internet technologies continue to evolve, they have penetrated human–computer interaction systems ever more deeply, and interaction methods have changed substantially. Gesture recognition has become a research hot spot in human–computer interaction, with broad application prospects and research value. A color-segmentation experiment shows that the skin color of gestures clusters better in the YCrCg color space than in the YCrCb space. In preprocessing the gesture images, an improved Otsu method is proposed to improve real-time performance when threshold-segmenting the hand region; morphological processing is then applied, and median filtering is used to denoise the image and improve its quality. A gesture recognition algorithm is designed: first, GraphSAGE processes the graph-structured gesture data; then the Adaboost algorithm combines two strong classifiers, a random forest and a support vector machine, into a cascade classifier through a cascade structure. The output of GraphSAGE is thereby classified and the meaning of the gesture is determined. On the test set, the algorithm achieves an average detection accuracy of 91.70%, a recall of 94.23%, and an average detection time of 330 ms per frame.
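The abstract does not give the authors' improved Otsu variant, so as a minimal sketch of the thresholding step it describes, here is the classic (unimproved) Otsu method in plain Python, assuming an 8-bit grayscale image supplied as a flat list of pixel intensities; the function name and input format are illustrative, not from the paper:

```python
def otsu_threshold(pixels):
    """Return the intensity threshold that maximizes between-class variance
    for an 8-bit grayscale image given as a flat list of values in 0..255."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1

    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))  # total intensity mass

    sum_b = 0      # intensity mass of the background class so far
    w_b = 0        # pixel count of the background class so far
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b          # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b          # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t


# Toy bimodal "image": dark hand region vs bright background.
pixels = [12] * 60 + [210] * 40
t = otsu_threshold(pixels)        # -> 12 for this bimodal input
binary = [1 if p > t else 0 for p in pixels]  # segmentation mask
```

In the pipeline the abstract outlines, a 3x3 median filter would then be applied to the binarized (or grayscale) image to suppress salt-and-pepper noise before the morphological processing.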
ISSN: 0941-0643; 1433-3058
DOI: 10.1007/s00521-022-07040-8