MonkeyPosekit: Automated Markerless 2D Pose Estimation of Monkey
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Video-based bone recognition is becoming a crucial tool for both clinical and neuroscientific research on fine and complicated movements. However, it is very time-consuming and lacks accuracy in extracting specific aspects of behavior, such as hand shaking and other fine motor skills, especially for automated analysis in non-human primate studies. OpenMonkeyStudio is available as a 3D toolbox to estimate the pose of an unmarked monkey. However, a 2D method is still lacking, since most laboratories use a single front camera to obtain 2D videos due to financial constraints. Here, we build a bone-recognition auxiliary tool called MonkeyPosekit, based on deep learning, which automatically captures the stream information from 2D videos without the need for external hardware assistance. MonkeyPosekit identifies the monkey's activity space and tracks 13 bone joint points for behavioral testing. Furthermore, we propose a novel data augmentation approach called CageAUG to overcome the occlusion issues in this study. Equipped with the CageAUG augmentation, the accuracy reaches 98.8% on the Open Monkey dataset using the High-Resolution Network (HRNet).
ISSN: 2688-0938
DOI: 10.1109/CAC53003.2021.9727703
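
The summary above names CageAUG only as a data augmentation strategy for handling cage occlusion; no implementation details appear in this record. The following minimal Python sketch illustrates the general idea under stated assumptions: synthetic cage-like bars are drawn over training frames so the pose network still sees supervision for joints hidden behind bars. The function name, bar geometry, and all default parameters are hypothetical and not the paper's actual CageAUG.

```python
import numpy as np

def cage_occlusion_augment(image, n_vertical=4, n_horizontal=2,
                           bar_width=6, bar_color=40, rng=None):
    """Overlay synthetic cage-like bars on a training frame.

    Hypothetical sketch of a cage-occlusion augmentation; the actual CageAUG
    procedure is not described in this record, so the bar geometry and
    defaults here are assumptions.
    """
    rng = rng if rng is not None else np.random.default_rng()
    out = image.copy()
    h, w = out.shape[:2]

    # Vertical bars at random horizontal positions (cage bars in front of the monkey).
    for x in rng.integers(0, max(1, w - bar_width), size=n_vertical):
        out[:, x:x + bar_width] = bar_color

    # Horizontal bars at random vertical positions.
    for y in rng.integers(0, max(1, h - bar_width), size=n_horizontal):
        out[y:y + bar_width, :] = bar_color

    return out

if __name__ == "__main__":
    # Dummy 256x256 RGB frame; the keypoint labels would be left unchanged so
    # the model is still supervised on joints that the synthetic bars occlude.
    frame = np.full((256, 256, 3), 128, dtype=np.uint8)
    augmented = cage_occlusion_augment(frame)
    print(augmented.shape, augmented.dtype)
```

Applied on the fly during training, such an augmentation exposes the keypoint model to bar-like occlusions without collecting extra footage, which is consistent with the summary's claim that CageAUG addresses occlusion for single-camera 2D recordings.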