Visual-Inertial Fusion-Based Human Pose Estimation: A Review

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2023-01, Vol. 72, p. 1-1
Main Authors: Li, Tong; Yu, Haoyong
Format: Article
Language: English
Subjects: Cameras; computer vision; Elbow; Human motion; Inertial fusion (reactor); Magnetic field measurement; motion capture; Physical exercise; Pose estimation; Robot sensing systems; sensor fusion; State-of-the-art reviews; Thigh; visual-inertial; Wrist
Description: Human pose estimation provides valuable information for biomedical research on human movement and applications such as entertainment and physical exercise. The fusion of visual and inertial data has been increasingly studied in the past two decades to take advantage of these two naturally complementary sensing modalities. In this paper, we systematically reviewed the advances in visual-inertial fusion-based human pose estimation with a thorough search for related studies in five mainstream literature databases. A total of 54 studies were identified and included by screening 4586 records retrieved in the review process. The estimation targets, hardware design, fusion methods, evaluation metrics, and system accuracy of these included studies were summarized and categorized for analysis. From these state-of-the-art studies, challenges in terms of mobility, calibration, real-time estimation, and evaluation methods are further discussed in depth and possible directions to overcome these issues are recommended. We expect this systematic review can provide researchers and engineers with a thorough idea of the progress and performance in visual-inertial fusion-based human pose estimation. We also hope that the discussions on challenges and possible future directions can facilitate future work to improve such systems and promote their applications in real life.
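Note: the abstract describes visual and inertial sensing as naturally complementary, since camera-based measurements are drift-free but low-rate and occlusion-prone, while inertial measurements are high-rate but accumulate drift. The sketch below is an illustrative, minimal example of that idea in the form of a complementary filter; it is not a method taken from the reviewed paper or from any included study, and all function names, sampling rates, and the blending gain are hypothetical.

    # Minimal sketch, assuming a single joint angle: fuse a drifting
    # gyroscope integration with intermittent camera measurements.
    import numpy as np

    def fuse_angle(gyro_rates, cam_angles, dt=0.01, alpha=0.98):
        """Fuse gyroscope angular rates (rad/s) with camera angle
        measurements (rad, NaN where no camera frame is available)
        into one angle estimate per time step."""
        angle = 0.0 if np.isnan(cam_angles[0]) else cam_angles[0]
        fused = []
        for rate, cam in zip(gyro_rates, cam_angles):
            # Predict by integrating the gyroscope (fast, but drifts).
            angle += rate * dt
            # Correct with the camera when a frame is available
            # (drift-free, but slower and occasionally occluded).
            if not np.isnan(cam):
                angle = alpha * angle + (1.0 - alpha) * cam
            fused.append(angle)
        return np.array(fused)

    # Hypothetical usage: a joint rotating at 1 rad/s, IMU at 100 Hz,
    # camera frames every 10th sample.
    t = np.arange(0, 1, 0.01)
    gyro = np.full_like(t, 1.0) + np.random.normal(0, 0.05, t.size)
    cam = np.where(np.arange(t.size) % 10 == 0, t * 1.0, np.nan)
    print(fuse_angle(gyro, cam)[-1])  # close to 1.0 rad after 1 s

The reviewed literature covers considerably more elaborate fusion schemes (e.g., Kalman-filter and optimization-based estimators over full-body kinematic models); the sketch only shows the basic trade-off that motivates combining the two modalities.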
DOI: 10.1109/TIM.2023.3286000
ISSN: 0018-9456
EISSN: 1557-9662
Source: IEEE Electronic Library (IEL) Journals