A 2D Convolutional Neural Network Approach for Human Action Recognition
Nowadays, deep neural networks are widely used for human action recognition (HAR) due to their ability to operate directly on raw video inputs by extracting both spatial and temporal information. Although 3D convolutional neural networks have achieved superior performance as deep models, they remain computationally expensive. In this paper, we propose a 2D-CNN approach that learns robust feature representations from the temporal information embedded in the motion history images of action videos. The proposed approach is simple and reduces the computational complexity imposed by 3D-CNN approaches. The KTH database is used to validate our approach, and the achieved results compare favorably with handcrafted state-of-the-art methods.
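The abstract describes the method only at a high level: collapse a clip's temporal information into a single motion history image (MHI), then classify that 2D image with an ordinary 2D CNN. The sketch below illustrates that pipeline; it is not the authors' code, and the frame size, decay constant `tau`, difference threshold, and network layout are illustrative assumptions (only the six KTH action classes come from the cited dataset).

```python
# Minimal sketch (not the paper's implementation): build a motion history
# image from a stack of grayscale frames with NumPy, then classify it with
# a small 2D CNN in PyTorch. All hyperparameters here are assumptions.
import numpy as np
import torch
import torch.nn as nn

def motion_history_image(frames, tau=30, diff_thresh=25):
    """frames: (T, H, W) uint8 grayscale frames of one action clip."""
    mhi = np.zeros(frames.shape[1:], dtype=np.float32)
    for t in range(1, frames.shape[0]):
        # Pixels that changed between consecutive frames are "moving".
        moving = np.abs(frames[t].astype(np.int16) -
                        frames[t - 1].astype(np.int16)) >= diff_thresh
        # Moving pixels are set to tau; older motion decays by 1 per frame.
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    return (mhi / tau).astype(np.float32)  # normalise to [0, 1]

class SmallMHICNN(nn.Module):
    """Illustrative 2D CNN taking a single-channel MHI as input."""
    def __init__(self, num_classes=6):  # KTH defines 6 action classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):  # x: (N, 1, H, W)
        return self.classifier(self.features(x))

# Usage with dummy data standing in for one KTH clip.
frames = (np.random.rand(40, 120, 160) * 255).astype(np.uint8)
mhi = motion_history_image(frames)
logits = SmallMHICNN()(torch.from_numpy(mhi)[None, None])
print(logits.shape)  # torch.Size([1, 6])
```

Because the network sees one MHI per clip rather than a stack of frames, the convolution cost scales with a single image instead of the clip length, which is the source of the complexity reduction the abstract claims relative to 3D-CNN approaches.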
Main Authors: | Toudjeu, Ignace Tchangou; Tapamo, Jules-Raymond |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Computational modeling; Convolution; convolutional neural network; Deep learning; Feature extraction; History; human action recognition; Kernel; motion history image; Two dimensional displays |
Online Access: | Request full text |
cited_by | |
---|---|
cites | |
container_end_page | 5 |
container_issue | |
container_start_page | 1 |
container_title | |
container_volume | |
creator | Toudjeu, Ignace Tchangou; Tapamo, Jules-Raymond |
description | Nowadays, deep neural networks are widely used for human action recognition (HAR) due to their ability to operate directly on raw video inputs by extracting both spatial and temporal information. Although 3D convolutional neural networks have achieved superior performance as deep models, they remain computationally expensive. In this paper, we propose a 2D-CNN approach that learns robust feature representations from the temporal information embedded in the motion history images of action videos. The proposed approach is simple and reduces the computational complexity imposed by 3D-CNN approaches. The KTH database is used to validate our approach, and the achieved results compare favorably with handcrafted state-of-the-art methods. |
doi_str_mv | 10.1109/AFRICON46755.2019.9133840 |
format | conference_proceeding |
fullrecord | (raw IEEE Xplore source record; duplicates the fields listed here, plus publisher: IEEE, conference date: 2019-09, EISBN: 1728132894; 9781728132891) |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2153-0033 |
ispartof | 2019 IEEE AFRICON, 2019, p.1-5 |
issn | 2153-0033 |
language | eng |
recordid | cdi_ieee_primary_9133840 |
source | IEEE Xplore All Conference Series |
subjects | Computational modeling; Convolution; convolutional neural network; Deep learning; Feature extraction; History; human action recognition; Kernel; motion history image; Two dimensional displays |
title | A 2D Convolutional Neural Network Approach for Human Action Recognition |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T07%3A47%3A41IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=A%202D%20Convolutional%20Neural%20Network%20Approach%20for%20Human%20Action%20Recognition&rft.btitle=2019%20IEEE%20AFRICON&rft.au=Toudjeu,%20Ignace%20Tchangou&rft.date=2019-09&rft.spage=1&rft.epage=5&rft.pages=1-5&rft.eissn=2153-0033&rft_id=info:doi/10.1109/AFRICON46755.2019.9133840&rft.eisbn=1728132894&rft.eisbn_list=9781728132891&rft_dat=%3Cieee_CHZPO%3E9133840%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i203t-a51cb82074e1d5be99354bf30aeeba8118bc7dac5363a2be49043c70d3afc6bc3%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=9133840&rfr_iscdi=true |