Human In-Hand Motion Recognition Based on Multi-Modal Perception Information Fusion
A human in-hand motion (HIM) recognition system based on multi-modal perception information fusion is proposed in this paper, which observes the state information between the object and the hand across ten customized HIM manipulations in order to recognize complex HIMs. First, combined with the characteristics of HIM capture, ten HIM sets are designed, and finger-trajectory, contact-force, and electromyographic signal data are acquired synchronously through a multi-modal data-acquisition platform; second, motion segmentation is performed with a threshold segmentation method, multi-modal signal preprocessing with Empirical Mode Decomposition (EMD), and multi-modal feature extraction with the Maximum Lyapunov Exponent (MLE); then a detailed non-linear data analysis is carried out. A detailed analysis and discussion are presented of the Random Forest (RF) results for recognizing HIMs, and of the comparative recognition rates across different subjects, different perception modalities, and different machine learning methods. The experimental results show that the proposed multi-modal perception information based HIM recognition system effectively recognizes ten different HIMs, with an accuracy of 93.72%.
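The pipeline described in the abstract (threshold-based motion segmentation, then a nonlinear feature per channel, then classification) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: `segment_by_threshold` is a simple stand-in for the paper's threshold segmentation, and `largest_lyapunov` is a rough Rosenstein-style estimate of the maximum Lyapunov exponent; the resulting per-channel exponents would then feed any standard Random Forest classifier.

```python
import numpy as np

def segment_by_threshold(signal, threshold):
    """Return (start, end) index pairs where |signal| exceeds the threshold.
    A simple stand-in for the paper's threshold segmentation step."""
    active = np.abs(np.asarray(signal)) > threshold
    edges = np.diff(active.astype(int))
    starts = list(np.where(edges == 1)[0] + 1)
    ends = list(np.where(edges == -1)[0] + 1)
    if active[0]:
        starts.insert(0, 0)          # signal starts above threshold
    if active[-1]:
        ends.append(len(active))     # signal ends above threshold
    return list(zip(starts, ends))

def largest_lyapunov(x, m=3, tau=1, horizon=8, min_sep=5):
    """Rough Rosenstein-style maximum Lyapunov exponent estimate:
    delay-embed the series, pair each point with its nearest neighbour
    (excluding temporally close points), and fit the slope of the mean
    log-divergence curve over a short horizon."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    # Delay embedding: each row is (x[i], x[i+tau], ..., x[i+(m-1)tau]).
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    # Forbid the point itself and its temporal neighbours as "nearest".
    for k in range(min_sep):
        idx = np.arange(n - k)
        dist[idx, idx + k] = np.inf
        dist[idx + k, idx] = np.inf
    nn = np.argmin(dist, axis=1)
    divergence = []
    for k in range(1, horizon + 1):
        valid = (np.arange(n) + k < n) & (nn + k < n)
        d = np.linalg.norm(emb[np.arange(n)[valid] + k] - emb[nn[valid] + k], axis=1)
        d = d[d > 0]
        if len(d) == 0:
            break
        divergence.append(np.mean(np.log(d)))
    steps = np.arange(1, len(divergence) + 1)
    return np.polyfit(steps, divergence, 1)[0]  # slope ≈ largest exponent
```

For a chaotic series (e.g. the logistic map at r = 4) the estimated exponent comes out positive; in a HIM feature vector, one such exponent per trajectory, force, and EMG channel within each segmented motion window would form the input to the classifier.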
Published in: | IEEE sensors journal 2022-04, Vol.22 (7), p.6793-6805 |
---|---|
Main Authors: | Xue, Yaxu; Yu, Yadong; Yin, Kaiyang; Li, Pengfei; Xie, Shuangxi; Ju, Zhaojie |
Format: | Article |
Language: | English |
Subjects: | Gesture recognition; human in-hand motion; empirical mode decomposition; maximum Lyapunov exponent; random forest; Multi-modal information |
container_end_page | 6805 |
container_issue | 7 |
container_start_page | 6793 |
container_title | IEEE sensors journal |
container_volume | 22 |
creator | Xue, Yaxu; Yu, Yadong; Yin, Kaiyang; Li, Pengfei; Xie, Shuangxi; Ju, Zhaojie |
description | A human in-hand motion (HIM) recognition system based on multi-modal perception information fusion is proposed in this paper, which can observe the state information between the object and the hand by using customized ten kinds of HIM manipulation in order to recognize the complex HIMs. First, combined with the characteristics of HIM capture, ten kinds of HIM sets are designed, and finger trajectory, contact force and electromyographic signal data are acquired synchronously through the multi-modal data acquisition platform; second, motion segmentation is realized through the threshold segmentation method, the multi-modal signal preprocessing is realized by Empirical Mode Decomposition (EMD), and multi-modal signal feature extraction is realized by Maximum Lyapunov Exponent (MLE); then, a detailed non-linear data analysis is carried out. A detailed analysis and discussion are presented from the results of the Random Forest (RF) recognizing HIMs, the comparison results of motion recognition rates of different subjects, the comparison results of motion recognition rates of different perceptrons, and the comparison results of the motion recognition rates of different machine learning methods. The experimental results show that the multi-modal perception information based HIM recognition system proposed in this paper can effectively recognize ten different HIMs, with an accuracy rate of 93.72%. |
doi_str_mv | 10.1109/JSEN.2022.3148992 |
format | article |
publisher | New York: IEEE |
rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
identifier | ISSN: 1530-437X |
ispartof | IEEE sensors journal, 2022-04, Vol.22 (7), p.6793-6805 |
issn | 1530-437X 1558-1748 |
language | eng |
source | IEEE Electronic Library (IEL) Journals |
subjects | Character recognition; Contact force; Data acquisition; Data analysis; Data integration; Empirical analysis; empirical mode decomposition; Feature extraction; Force; Gesture recognition; human in-hand motion; Human motion; Liapunov exponents; Machine learning; maximum Lyapunov exponent; Modal data; Motion perception; Multi-modal information; random forest; Recognition; Segmentation; Sensor phenomena and characterization; Sensors; Wavelet transforms |
title | Human In-Hand Motion Recognition Based on Multi-Modal Perception Information Fusion |