MonkeyPosekit: Automated Markerless 2D Pose Estimation of Monkey

Bibliographic Details
Main Authors: Li, Chu-xi, Yang, Chi, Li, Ye-rong, Feng, Shufei, Zhang, Zhen, Wang, Yuhao, Liu, Qiegen, Xiao, Xiao, Xiong, Shisheng
Format: Conference Proceeding
Language: English
Subjects: Bone recognition; Bones; data augmentation; Deep learning; Hardware; High-Resolution network; Laboratories; Manuals; non-human primates; Pose estimation; Three-dimensional displays
Online Access: Request full text
cited_by
cites
container_end_page 1284
container_issue
container_start_page 1280
container_title 2021 China Automation Congress (CAC)
container_volume
creator Li, Chu-xi
Yang, Chi
Li, Ye-rong
Feng, Shufei
Zhang, Zhen
Wang, Yuhao
Liu, Qiegen
Xiao, Xiao
Xiong, Shisheng
description Video-based bone recognition is becoming a crucial tool for both clinical and neuroscientific research on fine and complicated movements. However, extracting specific aspects of behavior, such as hand shaking and other fine motor skills, is time-consuming and insufficiently accurate, especially for automated analysis in non-human primate studies. OpenMonkeyStudio is available as a 3D toolbox for estimating the pose of an unmarked monkey, but a 2D method is still lacking, even though most laboratories, for financial reasons, use a single front camera and therefore obtain only 2D videos. Here, we build a bone-recognition auxiliary tool called MonkeyPosekit, based on deep learning, that automatically captures stream information from 2D videos without external hardware assistance. MonkeyPosekit identifies the monkey's activity space and tracks 13 bone joint points for behavioral testing. Furthermore, we propose a novel data-augmentation approach called CageAUG to overcome the occlusion issues in this study. Equipped with CageAUG augmentation, accuracy reaches 98.8% on the OpenMonkey dataset using the High-Resolution network.
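The record gives no implementation details for CageAUG beyond its stated purpose (overcoming occlusion, presumably from cage bars in front-camera footage). The sketch below is a minimal guess at that idea, not the authors' method: it composites bar-shaped occluders over a training frame and marks covered joints invisible. The function name `cage_occlusion_augment` and every parameter are hypothetical.

```python
import numpy as np

def cage_occlusion_augment(image, keypoints, num_bars=4,
                           bar_width_frac=0.03, rng=None):
    """Overlay random bar-shaped occluders to mimic cage bars.

    image:     HxWx3 uint8 frame.
    keypoints: (N, 3) float array of (x, y, visibility) per joint.
    Returns the occluded frame and keypoints with visibility set to 0
    for any joint that a bar covers.
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    kps = keypoints.astype(float).copy()
    h, w = out.shape[:2]
    bar = max(1, int(bar_width_frac * min(h, w)))
    for _ in range(num_bars):
        if rng.random() < 0.5:  # vertical bar
            x0 = int(rng.integers(0, w - bar))
            out[:, x0:x0 + bar] = 0
            hit = (kps[:, 0] >= x0) & (kps[:, 0] < x0 + bar)
        else:                   # horizontal bar
            y0 = int(rng.integers(0, h - bar))
            out[y0:y0 + bar, :] = 0
            hit = (kps[:, 1] >= y0) & (kps[:, 1] < y0 + bar)
        kps[hit, 2] = 0.0       # mark covered joints as occluded
    return out, kps
```

Marking covered joints invisible lets a heatmap-based keypoint loss (such as the kind used to train High-Resolution-network models) down-weight them, so the network learns to tolerate partial occlusion rather than being penalized for joints it cannot see.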
doi_str_mv 10.1109/CAC53003.2021.9727703
format conference_proceeding
fulltext fulltext_linktorsrc
identifier EISSN: 2688-0938
ispartof 2021 China Automation Congress (CAC), 2021, p.1280-1284
issn 2688-0938
language eng
recordid cdi_ieee_primary_9727703
source IEEE Xplore All Conference Series
subjects Bone recognition
Bones
data augmentation
Deep learning
Hardware
High-Resolution network
Laboratories
Manuals
non-human primates
Pose estimation
Three-dimensional displays
title MonkeyPosekit: Automated Markerless 2D Pose Estimation of Monkey
url https://ieeexplore.ieee.org/document/9727703