An Interpretable Deep Learning-Based Feature Reduction in Video-Based Human Activity Recognition
Published in: IEEE Access, 2024, Vol. 12, pp. 187947-187963
Format: Article
Language: English
Summary: This paper presents a human activity recognition framework tailored for healthcare applications, emphasizing the balance between accuracy and interpretability required for medical monitoring. The model utilizes MediaPipe to capture the complex dynamics of human movements and introduces an interpretable feature reduction function. This method improves on traditional dimensionality reduction techniques such as principal component analysis (PCA). Our feature engineering is based on permutation feature importance; it selectively retains salient features, thus enhancing the interpretability essential for the medical domain. We validated our method on the NTU RGB+D dataset; it improves recognition accuracy for a range of human activities that may be relevant to elderly care. However, the recognition of subtler activities, such as those associated with neck pain and headaches, requires further investigation. This study underscores the framework's potential to advance patient monitoring and sets the stage for its expanded application in various medical contexts.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3432776
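
The abstract describes a feature reduction step driven by permutation feature importance rather than PCA. The snippet below is a minimal sketch of that general idea using scikit-learn's permutation_importance; the synthetic pose-landmark features, the RandomForestClassifier surrogate, and the top_k cutoff are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: permutation-importance-based feature selection as an alternative to PCA.
# Assumptions: synthetic stand-ins for MediaPipe pose features, a random-forest
# surrogate classifier, and an arbitrary top_k cutoff (none are from the paper).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for MediaPipe pose features, e.g. 33 landmarks x (x, y, z) = 99 dims.
X = rng.normal(size=(1000, 99))
y = rng.integers(0, 5, size=1000)  # five hypothetical activity classes

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: the drop in validation score when each feature
# column is shuffled, estimated over several repeats.
result = permutation_importance(clf, X_val, y_val, n_repeats=10, random_state=0)

# Retain only the most salient features; unlike PCA components, the selected
# columns keep their original meaning (specific landmark coordinates).
top_k = 20
selected = np.argsort(result.importances_mean)[::-1][:top_k]
X_reduced = X[:, selected]
print("Selected feature indices:", selected)
```

Because the retained columns map directly back to named pose landmarks, this kind of selection stays interpretable in a way PCA's linear combinations do not, which is the trade-off the summary emphasizes.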