On Input/Output Architectures for Convolutional Neural Network-Based Cross-View Gait Recognition
In this paper, we discuss input/output architectures for convolutional neural network (CNN)-based cross-view gait recognition. For this purpose, we consider two aspects: verification versus identification, and the tradeoff between the spatial displacements caused by subject difference and by view difference. More specifically, we use a Siamese network with a pair of inputs and a contrastive loss for verification, and a triplet network with a triplet of inputs and a triplet ranking loss for identification. These CNN architectures are insensitive to spatial displacement because the difference between a matching pair is computed at the last layer, after the inputs have passed through the convolution and max-pooling layers; hence, they are expected to work relatively well under large view differences. By contrast, because the spatial displacement caused by subject difference is best exploited under small view differences, we also use CNN architectures in which the difference between a matching pair is computed at the input level, making them more sensitive to spatial displacement. We conducted experiments on cross-view gait recognition and confirmed that the proposed architectures outperformed state-of-the-art benchmarks in their respective suitable situations of verification/identification tasks and view differences.
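The abstract's two output-level designs correspond to standard metric-learning setups, sketched below in PyTorch. This is a minimal illustration, not the authors' implementation: the framework choice, the 128 × 88 GEI input size, and every layer size are assumptions made for brevity. A shared embedding CNN compares the matching pair only at the last layer, with a contrastive loss over input pairs for verification and a triplet ranking loss over input triplets for identification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitCNN(nn.Module):
    """Shared embedding network for gait energy images (GEIs).
    Layer counts and sizes are illustrative assumptions."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 64, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.LazyLinear(embed_dim)  # infers the flattened feature size

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def contrastive_loss(emb_a, emb_b, same_subject, margin=2.0):
    """Contrastive loss: pull same-subject pairs together, push
    different-subject pairs apart up to the margin (verification)."""
    d = F.pairwise_distance(emb_a, emb_b)
    return (same_subject * d.pow(2)
            + (1.0 - same_subject) * F.relu(margin - d).pow(2)).mean()

net = GaitCNN()
probe = torch.randn(8, 1, 128, 88)        # batch of 8 hypothetical 128x88 GEIs
gallery = torch.randn(8, 1, 128, 88)
same = torch.randint(0, 2, (8,)).float()  # 1 = same subject, 0 = different

# Verification: Siamese pair, difference taken at the last layer.
loss_verif = contrastive_loss(net(probe), net(gallery), same)

# Identification: triplet of inputs with a triplet ranking loss.
negative = torch.randn(8, 1, 128, 88)     # impostor samples
loss_ident = F.triplet_margin_loss(net(probe), net(gallery), net(negative))
```

Because the comparison happens only after the convolution and max-pooling layers, small spatial misalignments between probe and gallery are largely absorbed, which is why these variants suit large view differences. For small view differences, the paper instead favors taking the matching-pair difference at the input level. The sketch below illustrates that general idea under the same assumptions (it is not the paper's exact network): the pair is differenced before any convolution, so the classifier remains sensitive to the spatial displacement that separates subjects.

```python
import torch
import torch.nn as nn

class DiffCNN(nn.Module):
    """Input-level-difference network: the probe/gallery difference is
    formed before any convolution, keeping the network sensitive to
    spatial displacement (hypothetical layout, assumed sizes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 64, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.LazyLinear(2)  # same-subject vs. different-subject scores

    def forward(self, probe, gallery):
        diff = probe - gallery        # difference computed at the input level
        return self.head(self.features(diff).flatten(1))

scores = DiffCNN()(torch.randn(4, 1, 128, 88), torch.randn(4, 1, 128, 88))
```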
Published in: | IEEE Transactions on Circuits and Systems for Video Technology, 2019-09, Vol. 29 (9), pp. 2708-2719 |
Main Authors: | Takemura, Noriko; Makihara, Yasushi; Muramatsu, Daigo; Echigo, Tomio; Yagi, Yasushi |
Format: | Article |
Language: | English |
Subjects: | Artificial neural networks; Convolution; Convolutional neural network; cross-view; Displacement; Gait recognition; Matching; Mathematical analysis; Network architecture; Neural networks; Performance evaluation; Probes; Robustness |
DOI: | 10.1109/TCSVT.2017.2760835 |
ISSN: | 1051-8215 (print); 1558-2205 (electronic) |
Publisher: | IEEE, New York |
Source: | IEEE Electronic Library (IEL) Journals |