
Synthesizing 3D Gait Data with Personalized Walking Style and Appearance


Bibliographic Details
Published in: Applied sciences, 2023-02, Vol. 13 (4), p. 2084
Main Authors: Cheng, Yao, Zhang, Guichao, Huang, Sifei, Wang, Zexi, Cheng, Xuan, Lin, Juncong
Format: Article
Language:English
Abstract: Extracting gait biometrics from videos has attracted rapidly growing attention given its applications, such as person re-identification. Although deep learning is a promising way to improve the accuracy of most gait recognition algorithms, the lack of sufficient training data is a bottleneck. One way to address this data deficiency is to generate synthetic data. However, gait data synthesis is particularly challenging, as the inter-subject and intra-subject variations of walking style need to be carefully balanced. In this paper, we propose a complete 3D framework to synthesize unlimited, realistic, and diverse motion data. In addition to walking speed and lighting conditions, we emphasize two key factors: 3D gait motion style and character appearance. Benefiting from its 3D nature, our system can provide various kinds of gait-related data, such as accelerometer data and depth maps, not limited to silhouettes. We conducted various experiments using an off-the-shelf gait recognition algorithm and drew the following conclusions: (1) the real-to-virtual gap can be closed by adding a small portion of real-world data to a synthetically trained recognizer; (2) the amount of real training data needed to train competitive gait recognition systems can be reduced significantly; (3) the rich variations in gait data are helpful for investigating algorithm performance under different conditions. The synthetic data generator, as well as all experiments, will be made publicly available.
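The abstract's first two conclusions describe training a recognizer mostly on synthetic data, augmented with a small sampled portion of real data. A minimal illustrative sketch of such a mixing step is shown below; the function name `mix_datasets` and the sample representation are hypothetical and not taken from the paper.

```python
import random

def mix_datasets(synthetic, real, real_fraction=0.1, seed=0):
    """Combine a large synthetic pool with a small sampled portion of real data.

    real_fraction is the share of the *real* pool to include, mimicking the
    idea that a small amount of real data can close the real-to-virtual gap.
    """
    rng = random.Random(seed)
    n_real = max(1, int(len(real) * real_fraction))
    sampled_real = rng.sample(real, n_real)  # subsample the real pool
    mixed = list(synthetic) + sampled_real
    rng.shuffle(mixed)                       # avoid ordering bias in training
    return mixed

# Example: 1000 synthetic samples plus 10% of a 200-sample real pool.
synthetic = [("synthetic", i) for i in range(1000)]
real = [("real", i) for i in range(200)]
train_set = mix_datasets(synthetic, real, real_fraction=0.1)
```

The resulting `train_set` holds all 1000 synthetic samples and 20 real ones; in practice, the fraction of real data would be tuned against recognition accuracy.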
DOI: 10.3390/app13042084
ISSN: 2076-3417
Source: Publicly Available Content Database (ProQuest)
Subjects:
3-D graphics
3D gait data
Accelerometers
Algorithms
Biometric identification
Biometrics
Biometry
Data collection
Datasets
Deep learning
Gait
Gait recognition
gait synthesis
human motion data
Methods
neural network
Neural networks
Surveillance
Three dimensional motion
Walking