
Enhancing motion tracking accuracy of a low-cost 3D video sensor using a biomechanical model, sensor fusion, and deep learning

Bibliographic Details
Published in: Frontiers in Rehabilitation Sciences, 2022-08, Vol. 3, p. 956381
Main Authors: Agami, Shahar; Riemer, Raziel; Berman, Sigal
Format: Article
Language: English
Description: Low-cost 3D video sensors equipped with routines for extracting skeleton data facilitate the widespread use of virtual reality (VR) for rehabilitation. However, the accuracy of the extracted skeleton data is often limited. Accuracy can be improved using a motion tracker, e.g., using a recurrent neural network (RNN). Yet, training an RNN requires a considerable amount of relevant and accurate training data. Training databases can be obtained using gold-standard motion tracking sensors. This limits the use of the RNN trackers in environments and tasks that lack accessibility to gold-standard sensors. Digital goniometers are typically cheaper, more portable, and simpler to use than gold-standard motion tracking sensors. The current work suggests a method for generating accurate skeleton data suitable for training an RNN motion tracker based on the offline fusion of a Kinect 3D video sensor and an electronic goniometer. The fusion applies nonlinear constraint optimization, where the constraints are based on an advanced shoulder-centered kinematic model of the arm. The model builds on the representation of the arm as a triangle (the arm triangle). The shoulder-centered representation of the arm triangle motion simplifies constraint representation and consequently the optimization problem. To test the performance of the offline fusion and the RNN trained using the optimized data, arm motion of eight participants was recorded using a Kinect sensor, an electronic goniometer, and, for comparison, a passive-marker-based motion tracker. The data generated by fusing the Kinect and goniometer recordings were used for training two long short-term memory (LSTM) RNNs. The input to one RNN included both the Kinect and the goniometer data, and the input to the second RNN included only Kinect data. The performance of the networks was compared to the performance of a tracker based on a Kalman filter and to the raw Kinect measurements.
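The fusion step described above can be illustrated with a toy sketch. This is not the authors' implementation: the arm-triangle model and the full constraint set are not reproduced here. Instead, the sketch shows the general idea under an assumed, simplified setup — adjust noisy Kinect joint positions as little as possible, subject to an equality constraint that the elbow angle implied by those positions match the goniometer reading. The function names (`elbow_angle`, `fuse_frame`) and the single-constraint formulation are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize


def elbow_angle(joints):
    """Elbow flexion angle (rad) from stacked shoulder, elbow, wrist positions."""
    s, e, w = np.asarray(joints).reshape(3, 3)
    u, f = s - e, w - e  # upper-arm and forearm vectors, both rooted at the elbow
    cosang = np.dot(u, f) / (np.linalg.norm(u) * np.linalg.norm(f))
    return np.arccos(np.clip(cosang, -1.0, 1.0))


def fuse_frame(kinect_joints, gonio_angle):
    """Minimally adjust noisy Kinect joints so the elbow angle matches the goniometer.

    kinect_joints: (3, 3) array of shoulder/elbow/wrist positions (m).
    gonio_angle: goniometer elbow angle (rad), treated as ground truth.
    """
    x0 = np.asarray(kinect_joints, dtype=float).ravel()
    cost = lambda x: np.sum((x - x0) ** 2)  # stay close to the Kinect estimate
    cons = {"type": "eq", "fun": lambda x: elbow_angle(x) - gonio_angle}
    res = minimize(cost, x0, constraints=[cons], method="SLSQP")
    return res.x.reshape(3, 3)
```

In the paper, the shoulder-centered arm-triangle model plays the role that the single elbow-angle constraint plays here: it keeps the constraint functions simple, which keeps the nonlinear program tractable.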
The fused data were highly accurate, considerably improving on the raw measurements. Both RNN trackers also achieved high accuracy and outperformed both the Kalman filter tracker and the raw Kinect measurements. The developed methods are suitable for integration with immersive VR rehabilitation systems in both clinic and home environments.
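For context on the baseline the trackers were compared against, a minimal constant-velocity Kalman filter for smoothing one noisy joint-angle channel might look like the following. This is an illustrative sketch only: the paper does not specify its filter design here, so the state model, the 30 Hz frame interval, and the noise parameters `q` and `r` are all assumptions.

```python
import numpy as np


def kalman_smooth(z, dt=1.0 / 30.0, q=5.0, r=0.01):
    """Filter a noisy angle sequence z with a constant-velocity Kalman filter.

    q: process-noise intensity (assumed), r: measurement-noise variance (assumed).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition: [angle, angular velocity]
    H = np.array([[1.0, 0.0]])             # only the angle is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])    # discretized white-noise-acceleration model
    R = np.array([[r]])
    x = np.array([[z[0]], [0.0]])          # seed the state with the first measurement
    P = np.eye(2)
    out = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[zk]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

A filter like this smooths measurement noise but, unlike a learned LSTM tracker, cannot correct the systematic skeleton-extraction errors the study targets — which is consistent with the RNN trackers outperforming it.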
DOI: 10.3389/fresc.2022.956381
Publisher: Frontiers Media S.A. (Switzerland)
PMID: 36188943
Rights: © 2022 Agami, Riemer and Berman
ISSN: 2673-6861
EISSN: 2673-6861
Source: PubMed Central
Subjects: kinematics; recurrent neural network (RNN); rehabilitation; Rehabilitation Sciences; upper limb; virtual reality