
Source Aware Deep Learning Framework for Hand Kinematic Reconstruction Using EEG Signal

Bibliographic Details
Published in:IEEE transactions on cybernetics 2023-07, Vol.53 (7), p.4094-4106
Main Authors: Pancholi, Sidharth, Giri, Amita, Jain, Anant, Kumar, Lalan, Roy, Sitikantha
Format: Article
Language:English
Abstract:
The ability to reconstruct the kinematic parameters of hand movement using noninvasive electroencephalography (EEG) is essential for strength and endurance augmentation using an exoskeleton/exosuit. In conventional system development, a classification-based brain-computer interface (BCI) controls external devices by providing discrete control signals to the actuator. Continuous kinematic reconstruction from the EEG signal is better suited for practical BCI applications. The state-of-the-art multivariable linear regression (mLR) method provides a continuous estimate of hand kinematics, achieving a maximum correlation of up to 0.67 between the measured and the estimated hand trajectory. In this work, three novel source-aware deep learning models are proposed for motion trajectory prediction (MTP): a multilayer perceptron (MLP), a convolutional neural network-long short-term memory (CNN-LSTM), and wavelet packet decomposition (WPD) for CNN-LSTM. A further novel element of the work is the use of brain source localization (BSL) [via standardized low-resolution brain electromagnetic tomography (sLORETA)] for reliable decoding of motor intention; this information drives channel selection and accurate EEG time-segment selection. The performance of the proposed models is compared with the conventional mLR technique on the reach, grasp, and lift (GAL) dataset. The effectiveness of the proposed framework is established using the Pearson correlation coefficient (PCC) and trajectory analysis, with a significant improvement in the correlation coefficient over the state-of-the-art mLR model. This work bridges the gap between the control and the actuator blocks, enabling real-time BCI implementation.
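The abstract evaluates the models with the Pearson correlation coefficient (PCC) between the measured and the estimated hand trajectory. As a minimal illustrative sketch (not taken from the paper's code), the PCC for one trajectory axis, represented as NumPy arrays, can be computed as:

```python
import numpy as np

def pearson_cc(measured: np.ndarray, estimated: np.ndarray) -> float:
    """Pearson correlation coefficient between a measured and an
    estimated 1-D trajectory (e.g. one axis of hand position)."""
    m = measured - measured.mean()   # center the measured signal
    e = estimated - estimated.mean() # center the estimated signal
    # PCC = covariance / (product of standard deviations)
    return float((m @ e) / (np.linalg.norm(m) * np.linalg.norm(e)))

# A perfectly linear relation between signals yields PCC = 1.0
t = np.linspace(0.0, 1.0, 100)
print(pearson_cc(t, 2.0 * t + 3.0))  # ≈ 1.0 (up to floating point)
```

A value near 0.67 (the mLR baseline cited above) would indicate moderate trajectory agreement; the proposed deep models report significantly higher values.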
DOI: 10.1109/TCYB.2022.3166604
ISSN: 2168-2267
EISSN: 2168-2275
Source: IEEE Electronic Library (IEL) Journals
Subjects:
Actuators
Artificial neural networks
Brain modeling
Brain-computer interface (BCI)
Convolutional neural networks
Correlation coefficients
Deep learning
Electroencephalography
electroencephalography (EEG)
Exoskeletons
Human-computer interface
intention mapping
Kinematics
Location awareness
Machine learning
motion trajectory prediction (MTP)
Multilayer perceptrons
noninvasive
Reconstruction
source localization
State of the art
Trajectory
Trajectory analysis
Wavelet transforms