
Shifted Chunk Transformer for Spatio-Temporal Representational Learning

Spatio-temporal representational learning has been widely adopted in various fields such as action recognition, video object segmentation, and action anticipation. Previous spatio-temporal representational learning approaches primarily employ ConvNets or sequential models, e.g., LSTM, to learn the intra-frame and inter-frame features. Recently, Transformer models have successfully dominated the study of natural language processing (NLP), image classification, etc. However, the pure-Transformer based spatio-temporal learning can be prohibitively costly on memory and computation to extract fine-grained features from a tiny patch. To tackle the training difficulty and enhance the spatio-temporal learning, we construct a shifted chunk Transformer with pure self-attention blocks. Leveraging the recent efficient Transformer design in NLP, this shifted chunk Transformer can learn hierarchical spatio-temporal features from a local tiny patch to a global video clip. Our shifted self-attention can also effectively model complicated inter-frame variances. Furthermore, we build a clip encoder based on Transformer to model long-term temporal dependencies. We conduct thorough ablation studies to validate each component and hyper-parameters in our shifted chunk Transformer, and it outperforms previous state-of-the-art approaches on Kinetics-400, Kinetics-600, UCF101, and HMDB51.

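As a reading aid only: the following minimal PyTorch sketch illustrates the two general ideas named in the abstract -- self-attention restricted to small chunks of patch tokens within a frame, followed by a Transformer clip encoder over per-frame tokens to model long-term temporal dependencies. It is not the authors' implementation; the shift between chunks is omitted, and every name and hyper-parameter below (ChunkAttention, ToyClipModel, chunk_size, the mean-pooling choices) is an assumption made for illustration.

import torch
import torch.nn as nn

class ChunkAttention(nn.Module):
    """Self-attention computed independently inside fixed-size chunks of patch tokens."""
    def __init__(self, dim: int, chunk_size: int, num_heads: int = 4):
        super().__init__()
        self.chunk_size = chunk_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim); num_patches is assumed divisible by chunk_size.
        b, n, d = x.shape
        chunks = x.reshape(b * n // self.chunk_size, self.chunk_size, d)
        attended, _ = self.attn(chunks, chunks, chunks)  # attention stays inside each chunk
        chunks = self.norm(chunks + attended)            # residual connection + layer norm
        return chunks.reshape(b, n, d)

class ToyClipModel(nn.Module):
    """Chunk-local spatial attention per frame, then a temporal Transformer over frame tokens."""
    def __init__(self, dim: int = 64, chunk_size: int = 16, num_classes: int = 400):
        super().__init__()
        self.spatial = ChunkAttention(dim, chunk_size)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.clip_encoder = nn.TransformerEncoder(layer, num_layers=2)  # models inter-frame dependencies
        self.head = nn.Linear(dim, num_classes)  # e.g. 400 classes as in Kinetics-400

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, frames, patches_per_frame, dim) -- pre-embedded patch tokens
        b, t, n, d = patches.shape
        x = self.spatial(patches.reshape(b * t, n, d))       # intra-frame, chunk-local attention
        frame_tokens = x.mean(dim=1).reshape(b, t, d)        # one pooled token per frame
        clip = self.clip_encoder(frame_tokens).mean(dim=1)   # pooled clip representation
        return self.head(clip)

if __name__ == "__main__":
    model = ToyClipModel()
    video = torch.randn(2, 8, 64, 64)   # (batch, frames, patches_per_frame, dim)
    print(model(video).shape)           # torch.Size([2, 400])

Restricting attention to fixed-size chunks keeps the per-frame cost linear in the number of chunks rather than quadratic in the number of patches, which is the efficiency concern the abstract raises for pure-Transformer video models.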

Bibliographic Details
Published in: arXiv.org 2021-10
Main Authors: Zha, Xuefan; Zhu, Wentao; Lv, Tingxun; Sen, Yang; Liu, Ji
Format: Article
Language: English
Subjects: Ablation; Coders; Feature extraction; Image classification; Image segmentation; Kinetics; Learning; Moving object recognition; Natural language processing; Transformers
Online Access: Get full text
container_title arXiv.org
creator Zha, Xuefan
Zhu, Wentao
Lv, Tingxun
Sen, Yang
Liu, Ji
description Spatio-temporal representational learning has been widely adopted in various fields such as action recognition, video object segmentation, and action anticipation. Previous spatio-temporal representational learning approaches primarily employ ConvNets or sequential models, e.g., LSTM, to learn the intra-frame and inter-frame features. Recently, Transformer models have successfully dominated the study of natural language processing (NLP), image classification, etc. However, the pure-Transformer based spatio-temporal learning can be prohibitively costly on memory and computation to extract fine-grained features from a tiny patch. To tackle the training difficulty and enhance the spatio-temporal learning, we construct a shifted chunk Transformer with pure self-attention blocks. Leveraging the recent efficient Transformer design in NLP, this shifted chunk Transformer can learn hierarchical spatio-temporal features from a local tiny patch to a global video clip. Our shifted self-attention can also effectively model complicated inter-frame variances. Furthermore, we build a clip encoder based on Transformer to model long-term temporal dependencies. We conduct thorough ablation studies to validate each component and hyper-parameters in our shifted chunk Transformer, and it outperforms previous state-of-the-art approaches on Kinetics-400, Kinetics-600, UCF101, and HMDB51.
format article
date 2021-10-29
publisher Ithaca: Cornell University Library, arXiv.org
rights 2021. This work is published under http://creativecommons.org/licenses/by-sa/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2021-10
issn 2331-8422
language eng
recordid cdi_proquest_journals_2565275191
source Access via ProQuest (Open Access)
subjects Ablation
Coders
Feature extraction
Image classification
Image segmentation
Kinetics
Learning
Moving object recognition
Natural language processing
Transformers
title Shifted Chunk Transformer for Spatio-Temporal Representational Learning
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T10%3A31%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Shifted%20Chunk%20Transformer%20for%20Spatio-Temporal%20Representational%20Learning&rft.jtitle=arXiv.org&rft.au=Zha,%20Xuefan&rft.date=2021-10-29&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2565275191%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_25652751913%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2565275191&rft_id=info:pmid/&rfr_iscdi=true