MotionTrack: Learning Motion Predictor for Multiple Object Tracking
Significant progress has been achieved in multi-object tracking (MOT) through the evolution of detection and re-identification (ReID) techniques. Despite these advancements, accurately tracking objects in scenarios with homogeneous appearance and heterogeneous motion remains a challenge. This challenge arises from two main factors: the insufficient discriminability of ReID features and the predominant utilization of linear motion models in MOT. In this context, we introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor that relies solely on object trajectory information. This predictor comprehensively integrates two levels of granularity in motion features to enhance the modeling of temporal dynamics and facilitate precise future motion prediction for individual objects. Specifically, the proposed approach adopts a self-attention mechanism to capture token-level information and a Dynamic MLP layer to model channel-level features. MotionTrack is a simple, online tracking approach. Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT, characterized by highly complex object motion.
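The abstract describes the core idea at a high level: a learnable motion predictor that uses only an object's past trajectory, combining self-attention over trajectory tokens (token-level information) with an MLP over feature channels (channel-level features). The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' released code: the class names, layer sizes, history length, and the plain channel-mixing MLP standing in for the paper's Dynamic MLP are all assumptions for orientation only.

```python
# Illustrative sketch only (assumed names and hyper-parameters, not the paper's
# implementation): predict an object's next bounding-box offset from its past
# trajectory alone, mixing token-level self-attention with channel-level MLP mixing.
import torch
import torch.nn as nn


class TrajectoryEncoderBlock(nn.Module):
    """Self-attention over trajectory tokens followed by an MLP over channels."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        # Plain channel-mixing MLP; the paper's "Dynamic MLP" is more involved.
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, history_length, dim) -- one token per past time step.
        a, _ = self.attn(x, x, x)
        x = self.norm1(x + a)            # token-level interactions
        x = self.norm2(x + self.mlp(x))  # channel-level mixing
        return x


class MotionPredictor(nn.Module):
    """Maps a history of box observations to a predicted next-step box offset."""

    def __init__(self, box_dim: int = 4, dim: int = 64, depth: int = 2):
        super().__init__()
        self.embed = nn.Linear(box_dim, dim)
        self.blocks = nn.ModuleList([TrajectoryEncoderBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, box_dim)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, history_length, 4) past boxes, e.g. (cx, cy, w, h).
        x = self.embed(history)
        for blk in self.blocks:
            x = blk(x)
        return self.head(x[:, -1])  # offset applied to the last observed box


# Usage: predict the next box for 8 tracked objects from their last 10 observations.
pred = MotionPredictor()(torch.randn(8, 10, 4))
print(pred.shape)  # torch.Size([8, 4])
```

In an online tracker of this kind, such predictions would replace a linear (e.g. Kalman-style) motion model when associating detections with existing tracks; the exact association step is outside the scope of this sketch.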
Published in: | arXiv.org 2024-03 |
---|---|
Main Authors: | Xiao, Changcheng; Cao, Qiong; Zhong, Yujie; Long, Lan; Zhang, Xiang; Luo, Zhigang; Tao, Dacheng |
Format: | Article |
Language: | English |
Subjects: | Datasets; Multiple target tracking; Object motion |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Xiao, Changcheng; Cao, Qiong; Zhong, Yujie; Long, Lan; Zhang, Xiang; Luo, Zhigang; Tao, Dacheng |
description | Significant progress has been achieved in multi-object tracking (MOT) through the evolution of detection and re-identification (ReID) techniques. Despite these advancements, accurately tracking objects in scenarios with homogeneous appearance and heterogeneous motion remains a challenge. This challenge arises from two main factors: the insufficient discriminability of ReID features and the predominant utilization of linear motion models in MOT. In this context, we introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor that relies solely on object trajectory information. This predictor comprehensively integrates two levels of granularity in motion features to enhance the modeling of temporal dynamics and facilitate precise future motion prediction for individual objects. Specifically, the proposed approach adopts a self-attention mechanism to capture token-level information and a Dynamic MLP layer to model channel-level features. MotionTrack is a simple, online tracking approach. Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT, characterized by highly complex object motion. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2822886311 |
source | Publicly Available Content Database |
subjects | Datasets Multiple target tracking Object motion |
title | MotionTrack: Learning Motion Predictor for Multiple Object Tracking |