
Weakly-Supervised Part-Attention and Mentored Networks for Vehicle Re-Identification

Vehicle re-identification (Re-ID) aims to retrieve images with the same vehicle ID across different cameras. Current part-level feature learning methods typically detect vehicle parts via uniform division, outside tools, or attention modeling. However, such part features often require expensive additional annotations and cause sub-optimal performance in the case of unreliable part mask predictions. In this paper, we propose a weakly-supervised Part-Attention Network (PANet) and Part-Mentored Network (PMNet) for Vehicle Re-ID. Firstly, PANet localizes vehicle parts via part-relevant channel recalibration and cluster-based mask generation without vehicle part supervisory information. Secondly, PMNet leverages teacher-student guided learning to distill vehicle part-specific features from PANet and performs multi-scale global-part feature extraction. During inference, PMNet can adaptively extract discriminative part features without part localization by PANet, preventing unstable part mask predictions. We address this Re-ID issue as a multi-task problem and adopt Homoscedastic Uncertainty to learn the optimal weighting of the ID losses. Experiments are conducted on two public benchmarks, showing that our approach outperforms recent methods that require no extra annotations, with an average increase of 3.0% in CMC@5 on VehicleID and over 1.4% in mAP on VeRi776. Moreover, our method can be extended to the occluded vehicle Re-ID task and exhibits good generalization ability.
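
The abstract says the ID losses are treated as a multi-task problem weighted with Homoscedastic Uncertainty. The sketch below shows the general form of that weighting scheme, with learned per-task log-variances in the style of Kendall et al.; the class name and the split into a global-ID and a part-ID loss are illustrative assumptions, not the paper's exact loss layout.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine task losses with learned homoscedastic-uncertainty weights.

    Sketch only: the task names below (global-ID vs. part-ID loss) are assumed
    for illustration and are not taken from the paper.
    """
    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance s_i = log(sigma_i^2) per task.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])   # 1 / sigma_i^2
            # Down-weight the task by its learned variance, plus a penalty term
            # so the model cannot trivially set all weights to zero.
            total = total + precision * loss + self.log_vars[i]
        return total

# Usage sketch with two hypothetical ID losses.
criterion = UncertaintyWeightedLoss(num_tasks=2)
global_id_loss = torch.tensor(1.3, requires_grad=True)
part_id_loss = torch.tensor(0.7, requires_grad=True)
total = criterion([global_id_loss, part_id_loss])
total.backward()
```

Because each log-variance appears both as a down-weighting factor and as an additive penalty, the network can only discount a noisy task at a cost, which is what makes the weighting learnable rather than hand-tuned.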

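The abstract also describes PANet as localizing parts through part-relevant channel recalibration followed by cluster-based mask generation, with no part-level labels. A minimal sketch of that kind of pipeline is given below; the squeeze-and-excitation style gating, the naive k-means over spatial feature vectors, and the names ChannelRecalibration and cluster_part_masks are assumptions about the general technique, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelRecalibration(nn.Module):
    """Squeeze-and-excitation style channel gating (an assumed form of channel recalibration)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # (B, C) per-channel weights
        return x * w[:, :, None, None]           # recalibrated feature map

def cluster_part_masks(feat, num_parts=3, iters=10):
    """Group spatial positions into num_parts masks with a tiny k-means
    over per-position feature vectors (illustration only, no gradients)."""
    B, C, H, W = feat.shape
    pixels = feat.detach().flatten(2).transpose(1, 2)     # (B, H*W, C)
    centroids = pixels[:, :num_parts, :].clone()          # naive init: first K positions
    for _ in range(iters):
        dist = torch.cdist(pixels, centroids)              # (B, H*W, K)
        assign = dist.argmin(dim=2)                        # (B, H*W)
        for k in range(num_parts):
            member = (assign == k).unsqueeze(-1).float()   # (B, H*W, 1)
            denom = member.sum(dim=1).clamp(min=1.0)       # avoid empty clusters
            centroids[:, k, :] = (pixels * member).sum(dim=1) / denom
    masks = F.one_hot(assign, num_parts).float()           # (B, H*W, K)
    return masks.transpose(1, 2).reshape(B, num_parts, H, W)

# Usage sketch on a dummy backbone feature map.
feat = ChannelRecalibration(256)(torch.randn(2, 256, 16, 16))
part_masks = cluster_part_masks(feat, num_parts=3)         # (2, 3, 16, 16)
```

Masks produced this way would only be needed at training time to mentor part-specific features, consistent with the abstract's note that PMNet does not require PANet's part localization at inference.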

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-12, Vol. 32 (12), p. 8887-8898
Main Authors: Tang, Lisha; Wang, Yi; Chau, Lap-Pui
Format: Article
Language: English
Subjects: Annotations; Attention; Clutter; Feature extraction; Lighting; Location awareness; Machine learning; Multi-task learning; Representation learning; Vehicle re-identification; Weak supervision
DOI: 10.1109/TCSVT.2022.3197844
ISSN: 1051-8215
EISSN: 1558-2205
Online Access: https://ieeexplore.ieee.org/document/9853621