Sensor-Based Human Activity Recognition with Spatio-Temporal Deep Learning
Human activity recognition (HAR) remains a challenging yet crucial problem to address in computer vision. HAR is primarily intended to be used with other technologies, such as the Internet of Things, to assist in healthcare and eldercare. With the development of deep learning, automatic high-level feature extraction has become a possibility and has been used to optimize HAR performance. Furthermore, deep-learning techniques have been applied in various fields for sensor-based HAR. This study introduces a new methodology using convolution neural networks (CNN) with varying kernel dimensions along with bi-directional long short-term memory (BiLSTM) to capture features at various resolutions. The novelty of this research lies in the effective selection of the optimal video representation and in the effective extraction of spatial and temporal features from sensor data using traditional CNN and BiLSTM. Wireless sensor data mining (WISDM) and UCI datasets are used for this proposed methodology in which data are collected through diverse methods, including accelerometers, sensors, and gyroscopes. The results indicate that the proposed scheme is efficient in improving HAR. It was thus found that unlike other available methods, the proposed method improved accuracy, attaining a higher score in the WISDM dataset compared to the UCI dataset (98.53% vs. 97.05%).
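The abstract describes a multi-resolution architecture: parallel 1-D convolutions with different kernel sizes extract local spatial features from windows of inertial sensor data, and a bidirectional LSTM then models the temporal dependencies before classification. The authors' published code is not reproduced here; the following is a minimal PyTorch sketch of that general idea, in which the window length (128 samples), channel count (3-axis accelerometer), kernel sizes, filter counts, and number of activity classes are illustrative assumptions rather than values taken from the paper.

```python
# Illustrative sketch of a multi-kernel CNN + BiLSTM classifier for windowed
# inertial data. Hyperparameters below are assumptions, not the paper's values.
import torch
import torch.nn as nn

class MultiKernelCNNBiLSTM(nn.Module):
    def __init__(self, in_channels=3, n_classes=6, kernel_sizes=(3, 7, 11), n_filters=64):
        super().__init__()
        # One convolutional branch per kernel size, each seeing the same window
        # at a different temporal resolution.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, n_filters, k, padding=k // 2),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            for k in kernel_sizes
        ])
        # A bidirectional LSTM models temporal dependencies over the
        # concatenated branch features.
        self.bilstm = nn.LSTM(
            input_size=n_filters * len(kernel_sizes),
            hidden_size=128,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * 128, n_classes)

    def forward(self, x):
        # x: (batch, channels, window_length), e.g. a 128-sample accelerometer window.
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        feats = feats.transpose(1, 2)          # (batch, time, features) for the LSTM
        out, _ = self.bilstm(feats)
        return self.classifier(out[:, -1, :])  # logits over activity classes

# Example: classify a batch of 8 windows of 3-axis accelerometer data.
model = MultiKernelCNNBiLSTM()
logits = model(torch.randn(8, 3, 128))
print(logits.shape)  # torch.Size([8, 6])
```

Each branch processes the same sensor window with a different kernel size, which is the role the "varying kernel dimensions" play in the abstract's description of capturing features at various resolutions.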
Published in: | Sensors (Basel, Switzerland), 2021-03, Vol. 21 (6), p. 2141 |
---|---|
Main Authors: | Nafea, Ohoud; Abdul, Wadood; Muhammad, Ghulam; Alsulaiman, Mansour |
Format: | Article |
Language: | English |
Subjects: | Bi-directional LSTM; convolution neural networks; Data Mining; Deep Learning; Human Activities; human activity recognition; Humans; local spatio-temporal features; Memory, Long-Term; Neural Networks, Computer |
container_issue | 6 |
container_start_page | 2141 |
container_title | Sensors (Basel, Switzerland) |
container_volume | 21 |
creator | Nafea, Ohoud; Abdul, Wadood; Muhammad, Ghulam; Alsulaiman, Mansour |
description | Human activity recognition (HAR) remains a challenging yet crucial problem to address in computer vision. HAR is primarily intended to be used with other technologies, such as the Internet of Things, to assist in healthcare and eldercare. With the development of deep learning, automatic high-level feature extraction has become a possibility and has been used to optimize HAR performance. Furthermore, deep-learning techniques have been applied in various fields for sensor-based HAR. This study introduces a new methodology using convolution neural networks (CNN) with varying kernel dimensions along with bi-directional long short-term memory (BiLSTM) to capture features at various resolutions. The novelty of this research lies in the effective selection of the optimal video representation and in the effective extraction of spatial and temporal features from sensor data using traditional CNN and BiLSTM. Wireless sensor data mining (WISDM) and UCI datasets are used for this proposed methodology in which data are collected through diverse methods, including accelerometers, sensors, and gyroscopes. The results indicate that the proposed scheme is efficient in improving HAR. It was thus found that unlike other available methods, the proposed method improved accuracy, attaining a higher score in the WISDM dataset compared to the UCI dataset (98.53% vs. 97.05%). |
doi_str_mv | 10.3390/s21062141 |
format | article |
identifier | ISSN: 1424-8220; EISSN: 1424-8220; PMID: 33803891 |
ispartof | Sensors (Basel, Switzerland), 2021-03, Vol.21 (6), p.2141 |
issn | 1424-8220 |
language | eng |
source | PubMed Central (Open Access); ProQuest - Publicly Available Content Database |
subjects | Bi-directional LSTM; convolution neural networks; Data Mining; Deep Learning; Human Activities; human activity recognition; Humans; local spatio-temporal features; Memory, Long-Term; Neural Networks, Computer |
title | Sensor-Based Human Activity Recognition with Spatio-Temporal Deep Learning |