
Interpretable and Accurate Convolutional Neural Networks for Human Activity Recognition

With advances in sensing technology and deep learning, deep-learning-based human activity recognition from sensor signal data has been actively studied. While deep neural networks can automatically extract features appropriate for the target task and focus on increasing recognition performance, they cannot select important input sensor signals, which leads to a lack of interpretability. Since not all signals from wearable sensors are important for the target task, sensor signal importance is insightful information for practitioners. In this article, we propose an interpretable and accurate convolutional neural network capable of selecting important sensor signals. This is enabled by spatially sparse convolutional filters whose sparsity is imposed by a spatial group lasso. Although there is a tradeoff between accuracy and interpretability, experimental results on the OPPORTUNITY activity recognition dataset show that the proposed model both improves recognition performance and selects important sensor signals, providing interpretability.
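
The abstract's selection mechanism can be illustrated concretely. The sketch below is a minimal, hypothetical PyTorch example, not the paper's actual architecture or code: it groups the first convolutional layer's weights by sensor column and adds a group-lasso penalty to the task loss, so training can shrink entire sensor columns toward zero. Names such as SensorSelectCNN, num_sensors, window_len, and the penalty weight lam are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SensorSelectCNN(nn.Module):
        def __init__(self, num_sensors: int, window_len: int, num_classes: int):
            super().__init__()
            # Input: (batch, 1, window_len, num_sensors). The kernel spans the
            # full sensor axis, so each column of the filter corresponds to
            # exactly one sensor signal.
            self.conv = nn.Conv2d(1, 16, kernel_size=(5, num_sensors), padding=(2, 0))
            self.fc = nn.Linear(16 * window_len, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h = F.relu(self.conv(x))              # (batch, 16, window_len, 1)
            return self.fc(h.flatten(start_dim=1))

        def group_lasso_penalty(self) -> torch.Tensor:
            # One group per sensor column, pooled across all filters and time
            # offsets; the sum of per-group L2 norms encourages entire sensor
            # columns (i.e., whole sensors) to vanish.
            w = self.conv.weight                  # (16, 1, 5, num_sensors)
            return (w.pow(2).sum(dim=(0, 1, 2)) + 1e-12).sqrt().sum()

    # Training-step sketch: the penalty is simply added to the task loss.
    model = SensorSelectCNN(num_sensors=12, window_len=64, num_classes=5)
    x = torch.randn(8, 1, 64, 12)                 # toy batch of sensor windows
    y = torch.randint(0, 5, (8,))
    lam = 1e-3                                    # regularization strength (assumed)
    loss = F.cross_entropy(model(x), y) + lam * model.group_lasso_penalty()
    loss.backward()

Sensors whose weight columns shrink to near zero can be read off as unimportant, which is the interpretability the abstract describes. Note that plain gradient descent on this penalty yields small but nonzero weights; exact sparsity generally requires a proximal update or post-hoc thresholding.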


Bibliographic Details
Published in: IEEE Transactions on Industrial Informatics, 2020-11, Vol. 16 (11), p. 7190-7198
Main Author: Kim, Eunji
Format: Article
Language:English
DOI: 10.1109/TII.2020.2972628
ISSN: 1551-3203
EISSN: 1941-0050
Source: IEEE Electronic Library (IEL) Journals
Subjects: Activity recognition
Artificial neural networks
Convolution
Convolutional neural networks (CNNs)
Deep learning
Feature extraction
feature selection
Human activity recognition
human activity recognition (HAR)
interpretability
Kernel
Machine learning
Model accuracy
Moving object recognition
Neural networks
regularization
sensor signal selection
Sensors
spatial group lasso (GL)
Task analysis
wearable sensors