Human Activity Recognition Based on Acceleration Data From Smartphones Using HMMs
Published in: IEEE Access, 2021, Vol. 9, p. 139336-139351
Main Authors: , , ,
Format: Article
Language: English
Summary: Smartphones are among the most popular wearable devices for monitoring human activities. Several existing methods for Human Activity Recognition (HAR) using smartphone data rely on conventional pattern recognition techniques, which depend on handcrafted feature vectors. Deep learning techniques overcome this drawback, but they require substantial computing resources and yield less interpretable feature vectors. This paper addresses these limitations by proposing a Hidden Markov Model (HMM)-based technique for HAR. More formally, the sequential variations of spatial locations within the raw data vectors are first captured in Markov chains, which are then used to initialize and train HMMs. Meta-data extracted from these models serve as the components of the feature vectors. The meta-data correspond to the overall fraction of time the model spends observing each symbol over a long time span, irrespective of the state from which the symbol is observed. Classification experiments covering four classification tasks were carried out on the recently constructed UniMiB SHAR database, which contains 17 classes: 9 types of activities of daily living and 8 types of falls. The proposed approach achieved best accuracies between 92% and 98.85% across all classification tasks, exceeding prior work by more than 10% on 2 of the 4 tasks.
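The symbol-occupancy meta-data described in the summary can be illustrated with a minimal sketch. Assuming a discrete HMM with transition matrix A and emission matrix B (the toy values below are hypothetical, not the authors' trained models or exact pipeline), the long-run fraction of time each symbol is observed is the stationary distribution of the state chain weighted by the emission probabilities:

```python
import numpy as np

def symbol_occupancy_features(A, B):
    """Sketch of the paper's feature idea: the long-run fraction of
    time each observation symbol is emitted by a discrete HMM,
    irrespective of the hidden state emitting it.

    A : (N, N) state transition matrix (rows sum to 1)
    B : (N, M) emission matrix (rows sum to 1)
    """
    # Stationary distribution pi of the state chain solves pi = pi @ A,
    # i.e. the left eigenvector of A for eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(A.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    pi = pi / pi.sum()
    # Long-run probability of each symbol, summed over hidden states.
    return pi @ B

# Toy 2-state, 3-symbol HMM (illustrative values only).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
print(symbol_occupancy_features(A, B))  # -> [0.5, 0.2333, 0.2667]
```

For the toy matrices, the stationary distribution is [2/3, 1/3], so symbol 0 is observed half the time in the long run; a vector of such per-symbol occupancies, one entry per symbol, would form the kind of interpretable feature vector the summary describes.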
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3117336