Visual Textual Alignment for Generalizable Person Re-Identification in Internet of Things
Person re-identification has gained increased attention for its important application in surveillance. Yet one of the major obstacles to deploying re-identification approaches in practical Internet of Things systems is their weak generalization ability. Although current methods achieve high performance in the supervised setting, their discriminative ability declines in unseen domains. Because pedestrians' attributes remain unchanged across domains, we attempt to exploit the alignment between these attributes and visual features to enhance our model's generalization ability. Furthermore, since existing methods cannot fully extract the attribute information, we formulate a more effective NLP-based method for attribute feature extraction. The generated features are thus termed textual features, and our proposed method is called Visual Textual Alignment (VTA). For alignment, two strategies are adopted: metric-learning-based alignment and adversarial-learning-based alignment. The former adjusts the metric relationships of different persons in the feature space, while the latter guides the model's domain-invariant feature learning. Experimental results demonstrate the effectiveness and superiority of our proposed method compared to state-of-the-art methods.
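The metric-learning-based visual-textual alignment named in the title can be illustrated with a small, dependency-free sketch. This is an assumption on my part: an InfoNCE-style contrastive loss that pulls each identity's visual embedding toward its own textual (attribute) embedding, not the paper's actual objective; the function names, temperature value, and toy embeddings are all illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (plain Python lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def alignment_loss(visual, textual, temperature=0.1):
    """InfoNCE-style alignment: for each identity i, visual[i] should be
    more similar to its own textual embedding textual[i] than to the
    textual embeddings of the other identities in the batch."""
    n = len(visual)
    total = 0.0
    for i in range(n):
        sims = [cosine(visual[i], textual[j]) / temperature for j in range(n)]
        # cross-entropy with the matching index i as the positive target
        log_z = math.log(sum(math.exp(s) for s in sims))
        total += log_z - sims[i]
    return total / n
```

Minimizing such a loss makes identity-matched visual/textual pairs close in the shared feature space: a batch where each visual embedding matches its own textual embedding scores a much lower loss than one where the pairings are swapped.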
Published in: | IEEE Internet of Things Journal, 2023-03, p. 1-1 |
---|---|
Main Authors: | Liu, Xiaosheng; Zhou, Zhiheng; Niu, Chang; Wu, Qingru |
Format: | Article |
Language: | English |
Subjects: | Attribute-aided; Data mining; Feature extraction; Generalizable Person Re-Identification; Internet of Things; Representation learning; Task analysis; Training; Visual Textual Alignment; Visualization |
cited_by | |
---|---|
cites | |
container_end_page | 1 |
container_issue | |
container_start_page | 1 |
container_title | IEEE internet of things journal |
container_volume | |
creator | Liu, Xiaosheng; Zhou, Zhiheng; Niu, Chang; Wu, Qingru |
description | Person re-identification has gained increased attention for its important application in surveillance. Yet one of the major obstacles to deploying re-identification approaches in practical Internet of Things systems is their weak generalization ability. Although current methods achieve high performance in the supervised setting, their discriminative ability declines in unseen domains. Because pedestrians' attributes remain unchanged across domains, we attempt to exploit the alignment between these attributes and visual features to enhance our model's generalization ability. Furthermore, since existing methods cannot fully extract the attribute information, we formulate a more effective NLP-based method for attribute feature extraction. The generated features are thus termed textual features, and our proposed method is called Visual Textual Alignment (VTA). For alignment, two strategies are adopted: metric-learning-based alignment and adversarial-learning-based alignment. The former adjusts the metric relationships of different persons in the feature space, while the latter guides the model's domain-invariant feature learning. Experimental results demonstrate the effectiveness and superiority of our proposed method compared to state-of-the-art methods. |
doi_str_mv | 10.1109/JIOT.2023.3263240 |
format | article |
fulltext | fulltext |
identifier | EISSN: 2327-4662 |
ispartof | IEEE internet of things journal, 2023-03, p.1-1 |
issn | 2327-4662 |
language | eng |
recordid | cdi_ieee_primary_10088440 |
source | IEEE Xplore (Online service) |
subjects | Attribute-aided; Data mining; Feature extraction; Generalizable Person Re-Identification; Internet of Things; Representation learning; Task analysis; Training; Visual Textual Alignment; Visualization |
title | Visual Textual Alignment for Generalizable Person Re-Identification in Internet of Things |
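The adversarial-learning-based alignment mentioned in the abstract is commonly realized with a gradient reversal layer (GRL), as in domain-adversarial training (Ganin and Lempitsky). The scalar toy model below is my own illustrative assumption, not the paper's architecture: a one-weight feature extractor and a one-weight logistic domain classifier, where the classifier descends its domain-classification gradient while the extractor receives the negated gradient, pushing it toward features from which the domain cannot be predicted.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_with_grl(steps=10, lr=0.1):
    """Toy adversarial alignment with a gradient reversal layer (GRL).

    Feature extractor: f = w * x.  Domain classifier: p = sigmoid(c * f).
    The classifier minimizes the domain-classification loss; the extractor
    is updated with the *negated* gradient of that same loss."""
    w, c = 1.0, 0.5
    samples = [(-2.0, 0), (2.0, 1)]      # (input, domain label)
    for _ in range(steps):
        gw = gc = 0.0
        for x, d in samples:
            f = w * x                     # extracted feature
            p = sigmoid(c * f)            # predicted domain probability
            err = p - d                   # dBCE/dlogit
            gc += err * f                 # gradient wrt classifier weight
            gw += err * c * x             # gradient wrt extractor weight
        c -= lr * gc                      # classifier: descend the loss
        w -= lr * (-gw)                   # extractor: reversed gradient
    return w
```

Running a few steps shrinks the extractor weight from its initial value, so the two domains' features (f = -2w and f = +2w) move closer together and the domain classifier finds them harder to separate. In general such minimax training oscillates rather than converging monotonically, which is why practical implementations schedule the reversal strength.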