
Representative‐discriminative dictionary learning algorithm for human action recognition using smartphone sensors

Summary: With the advancement of mobile computing, the understanding and interpretation of human activities have become increasingly popular as an innovative human-computer interaction application over the past few decades. This article presents a new scheme for action recognition based on sparse representation theory, using a novel dictionary learning algorithm. The system employs two types of inertial signals from smartphones, namely accelerometer and gyroscope sensory data. Higher classification accuracy depends on creating effective dictionaries that fully retain the important features of every action while maintaining the least correlation with the features of other actions. Accordingly, this research proposes a new algorithm with two levels of dictionary training that learns a compact, representative, and discriminative dictionary for each class. Unlike typical dictionary learning algorithms, which aim only at creating dictionaries that best represent the features of each class, the proposed algorithm incorporates a discriminative criterion that ultimately produces better classification results. To validate the proposed framework, all experiments were performed on three publicly available datasets.
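
The general framework the abstract describes, per-class dictionaries used inside a sparse-representation classifier, can be illustrated with a small baseline. The sketch below is not the representative-discriminative two-level training algorithm from the article; it is a generic class-wise dictionary learning baseline using scikit-learn, and the feature extraction, atom count, and sparsity level are illustrative assumptions.

```python
# Minimal, illustrative sketch only. This is a generic per-class dictionary
# learning baseline with a reconstruction-residual (SRC-style) classifier,
# NOT the representative-discriminative two-level training algorithm proposed
# in the article. Feature extraction, atom count, and sparsity level are
# assumptions made for demonstration.
import numpy as np
from sklearn.decomposition import DictionaryLearning


def extract_features(window):
    """Hypothetical features from one window of 6-axis inertial data
    (3-axis accelerometer + 3-axis gyroscope): per-axis mean and std."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])


def learn_class_dictionaries(X_train, y_train, n_atoms=20, sparsity=5):
    """Learn one dictionary per action class from that class's feature vectors."""
    dictionaries = {}
    for label in np.unique(y_train):
        learner = DictionaryLearning(
            n_components=n_atoms,
            transform_algorithm="omp",
            transform_n_nonzero_coefs=sparsity,
            max_iter=200,
            random_state=0,
        )
        learner.fit(X_train[y_train == label])
        dictionaries[label] = learner
    return dictionaries


def classify(x, dictionaries):
    """Assign x to the class whose dictionary reconstructs it with the
    smallest residual, the usual sparse-representation classification rule."""
    x = np.atleast_2d(x)
    residuals = {}
    for label, learner in dictionaries.items():
        code = learner.transform(x)              # sparse code over this class's atoms
        reconstruction = code @ learner.components_
        residuals[label] = np.linalg.norm(x - reconstruction)
    return min(residuals, key=residuals.get)
```

Classification here follows the standard sparse-representation rule of picking the class whose dictionary yields the smallest reconstruction residual; the article's contribution lies in how the class dictionaries themselves are trained to be both representative of their own class and discriminative with respect to the others.
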
Bibliographic Details
Published in: Concurrency and Computation, 2023-01, Vol. 35 (2), p. n/a
Main Authors: Rajamoney, Jansi; Ramachandran, Amutha
Format: Article
Language: English
Subjects: accelerometer; Accelerometers; action recognition; Algorithms; Classification; Dictionaries; dictionary; gyroscope; Human activity recognition; Human motion; Machine learning; Mobile computing; smartphone; Smartphones
DOI: 10.1002/cpe.7468
Publisher: John Wiley & Sons, Inc, Hoboken, USA
Published: 2023-01-25
ORCID: https://orcid.org/0000-0001-9894-0006
ISSN: 1532-0626
EISSN: 1532-0634
Source: Wiley-Blackwell Read & Publish Collection