Looking Beyond Two Frames: End-to-End Multi-Object Tracking Using Spatial and Temporal Transformers

Tracking a time-varying, indefinite number of objects in a video sequence remains a challenge despite recent advances in the field. Most existing approaches are not able to properly handle multi-object tracking challenges such as occlusion, in part because they ignore long-term temporal information. To address these shortcomings, we present MO3TR: a truly end-to-end Transformer-based online multi-object tracking (MOT) framework that learns to handle occlusions, track initiation and termination without the need for an explicit data association module or any heuristics. MO3TR encodes object interactions into long-term temporal embeddings using a combination of spatial and temporal Transformers, and recursively uses this information jointly with the input data to estimate the states of all tracked objects over time. The spatial attention mechanism enables our framework to learn implicit representations among all objects and between the objects and the measurements, while the temporal attention mechanism focuses on specific parts of past information, allowing our approach to resolve occlusions over multiple frames. Our experiments demonstrate the potential of this new approach, achieving results on par with or better than the current state of the art on multiple MOT metrics across several popular multi-object tracking benchmarks.
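The core mechanism in the abstract (temporal self-attention over each object's embedding history, spatial self-attention across all objects in a frame, and a recursive state update) can be illustrated with a short PyTorch sketch. This is a minimal illustration of the general idea under assumed shapes and names, not the authors' MO3TR code; the SpatialTemporalTracker module, its dimensions, and the simple additive fusion of temporal summaries with current-frame measurements are all invented for illustration.

```python
# A minimal, hypothetical sketch of the spatial/temporal attention idea the
# abstract describes -- NOT the authors' MO3TR implementation. All module and
# parameter names here are invented for illustration.
import torch
import torch.nn as nn


class SpatialTemporalTracker(nn.Module):
    """Toy tracker: temporal attention over each object's embedding history,
    then spatial attention across all objects in the current frame."""

    def __init__(self, dim: int = 256, heads: int = 8,
                 layers: int = 2, max_history: int = 16):
        super().__init__()
        # Temporal encoder: self-attention over one object's past embeddings.
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True),
            num_layers=layers)
        # Spatial encoder: self-attention across all objects within a frame.
        self.spatial = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True),
            num_layers=layers)
        self.max_history = max_history

    def forward(self, history: torch.Tensor, measurements: torch.Tensor):
        # history:      (num_objects, T, dim)  past embeddings per object
        # measurements: (num_objects, dim)     current-frame features
        # Temporal attention: summarise each track's past and read out the
        # embedding at the most recent time step.
        temporal_summary = self.temporal(history)[:, -1, :]        # (n, dim)
        # Fuse the temporal summary with the current measurement.
        fused = temporal_summary + measurements                    # (n, dim)
        # Spatial attention: objects (the sequence dim) interact in one frame.
        updated = self.spatial(fused.unsqueeze(0)).squeeze(0)      # (n, dim)
        # Recursion: append the new state to a truncated history window.
        new_history = torch.cat([history, updated.unsqueeze(1)], dim=1)
        return updated, new_history[:, -self.max_history:, :]


# Toy usage: 5 tracked objects, 4 past frames, 256-dim embeddings.
model = SpatialTemporalTracker()
states, hist = model(torch.randn(5, 4, 256), torch.randn(5, 256))
print(states.shape, hist.shape)  # torch.Size([5, 256]) torch.Size([5, 5, 256])
```

In the paper the fusion of past embeddings with current measurements is learned end to end within the Transformer stack; the additive fusion above is only a placeholder for that step.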

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence, 2023-11, Vol.45 (11), p.12783-12797
Main Authors: Zhu, Tianyu, Hiller, Markus, Ehsanpour, Mahsa, Ma, Rongkai, Drummond, Tom, Reid, Ian, Rezatofighi, Hamid
Format: Article
Language: English
Subjects: end-to-end learning; Feature extraction; Frames (data processing); History; Multi-object tracking; Multiple target tracking; Object recognition; Occlusion; pedestrian tracking; spatio-temporal model; Task analysis; Tracking; transformer; Transformers; Visualization
DOI: 10.1109/TPAMI.2022.3213073
ISSN: 0162-8828
EISSN: 1939-3539; 2160-9292
PMID: 36215373
Publisher: IEEE, New York