sEMG-Based Gesture Recognition With Embedded Virtual Hand Poses and Adversarial Learning
To improve the accuracy of surface electromyography (sEMG)-based gesture recognition, we present a novel hybrid approach that combines real sEMG signals with corresponding virtual hand poses. The virtual hand poses are generated by means of a proposed cross-modal association model constructed based on adversarial learning to capture the intrinsic relationship between the sEMG signals and the hand poses.
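The abstract distinguishes frame-based (per-sample) and window-based (segment-level) sEMG gesture recognition. The following is a minimal sketch, with hypothetical function names not taken from the paper, of the sliding-window segmentation and majority-vote aggregation commonly used to turn per-frame predictions into a single window-level decision:

```python
from collections import Counter

def sliding_windows(signal, window, step):
    """Segment a multichannel sEMG sequence (a list of frames) into
    overlapping windows of `window` frames, advancing by `step` frames."""
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]

def window_label(frame_labels):
    """Window-based decision: majority vote over per-frame predictions."""
    return Counter(frame_labels).most_common(1)[0][0]

# 10 frames of a 2-channel signal, 4-frame windows with 50% overlap
frames = [[0.1 * i, 0.2 * i] for i in range(10)]
wins = sliding_windows(frames, window=4, step=2)
assert len(wins) == 4  # windows start at frames 0, 2, 4, 6

# Majority vote smooths noisy frame predictions into one window label
assert window_label([3, 3, 1, 3]) == 3
```

The window length and overlap here are illustrative; in practice they are chosen per database to trade latency against accuracy.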
Published in: | IEEE Access, 2019, Vol. 7, pp. 104108–104120 |
---|---|
Main Authors: | Hu, Yu; Wong, Yongkang; Dai, Qingfeng; Kankanhalli, Mohan; Geng, Weidong; Li, Xiangdong |
Format: | Article |
Language: | English |
creator | Hu, Yu; Wong, Yongkang; Dai, Qingfeng; Kankanhalli, Mohan; Geng, Weidong; Li, Xiangdong |
description | To improve the accuracy of surface electromyography (sEMG)-based gesture recognition, we present a novel hybrid approach that combines real sEMG signals with corresponding virtual hand poses. The virtual hand poses are generated by means of a proposed cross-modal association model constructed based on adversarial learning to capture the intrinsic relationship between the sEMG signals and the hand poses. We report comprehensive evaluations of the proposed approach for both frame- and window-based sEMG gesture recognition on seven sparse multichannel and four high-density benchmark databases. The experimental results show that the proposed approach achieves significant improvements in sEMG-based gesture recognition compared to existing works. For frame-based sEMG gesture recognition, the recognition accuracy of the proposed framework is increased by an average of +5.2% on the sparse multichannel sEMG databases and by an average of +6.7% on the high-density sEMG databases compared to the existing methods. For window-based sEMG gesture recognition, the state-of-the-art recognition accuracies on three of the high-density sEMG databases are already higher than 99%, i.e., almost saturated; nevertheless, we achieve a +0.2% improvement. For the remaining eight sEMG databases, the average improvement with the proposed framework for the window-based approach is +2.5%. |
doi_str_mv | 10.1109/ACCESS.2019.2930005 |
format | article |
identifier | ISSN: 2169-3536 |
source | IEEE Xplore Open Access Journals |
subjects | Assistive technology; Density; Electrodes; Feature extraction; generative adversarial learning; Gesture recognition; Hand gesture recognition; Learning; Machine learning; myoelectric control; surface electromyography (sEMG); User experience; virtual hand pose |