
Illumination Unification for Person Re-Identification

The performance of person re-identification (re-ID) is easily affected by illumination variations caused by different shooting times, places and cameras. Existing illumination-adaptive methods usually require annotating cross-camera pedestrians on each illumination scale, which is unaffordable for a long-term person retrieval system. The cross-illumination person retrieval problem presents a great challenge for accurate person matching. In this paper, we propose a novel method to tackle this task, which only needs to annotate pedestrians on one illumination scale. Specifically, (i) we propose a novel Illumination Estimation and Restoring framework (IER) to estimate the illumination scale of testing images taken under different illumination conditions and restore them to the illumination scale of the training images, such that the disparities between training images with uniform illumination and testing images with varying illuminations are reduced. IER achieves promising results on the illumination-adaptive dataset, proving itself a proper baseline for cross-illumination person re-ID. (ii) We propose a Mixed Training strategy using both Original and Reconstructed images (MTOR) to further improve model performance. We generate reconstructed images that are consistent with the original training images in content but more similar to the restored images in style. The reconstructed images are combined with the original training images for supervised training to further reduce the domain gap between the original training images and the restored testing images. To verify the effectiveness of our method, several simulated illumination-adaptive datasets are constructed with various illumination conditions. Extensive experimental results on the simulated datasets validate the effectiveness of the proposed method. The source code is available at https://github.com/FadeOrigin/IUReId .

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-10, Vol. 32 (10), p. 6766-6777
Main Authors: Zhang, Guoqing, Luo, Zhiyuan, Chen, Yuhao, Zheng, Yuhui, Lin, Weisi
Format: Article
Language:English
description The performance of person re-identification (re-ID) is easily affected by illumination variations caused by different shooting times, places and cameras. Existing illumination-adaptive methods usually require annotating cross-camera pedestrians on each illumination scale, which is unaffordable for a long-term person retrieval system. The cross-illumination person retrieval problem presents a great challenge for accurate person matching. In this paper, we propose a novel method to tackle this task, which only needs to annotate pedestrians on one illumination scale. Specifically, (i) we propose a novel Illumination Estimation and Restoring framework (IER) to estimate the illumination scale of testing images taken under different illumination conditions and restore them to the illumination scale of the training images, such that the disparities between training images with uniform illumination and testing images with varying illuminations are reduced. IER achieves promising results on the illumination-adaptive dataset, proving itself a proper baseline for cross-illumination person re-ID. (ii) We propose a Mixed Training strategy using both Original and Reconstructed images (MTOR) to further improve model performance. We generate reconstructed images that are consistent with the original training images in content but more similar to the restored images in style. The reconstructed images are combined with the original training images for supervised training to further reduce the domain gap between the original training images and the restored testing images. To verify the effectiveness of our method, several simulated illumination-adaptive datasets are constructed with various illumination conditions. Extensive experimental results on the simulated datasets validate the effectiveness of the proposed method. The source code is available at https://github.com/FadeOrigin/IUReId .
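The IER idea in the abstract (estimate a test image's illumination scale, then restore the image toward the training-set illumination before matching) can be sketched with a toy stand-in. The paper's actual framework uses learned, GAN-based networks; the global gamma-correction model, the `TRAIN_MEAN` constant, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Assumed mean brightness (in [0, 1]) of the uniformly lit training set.
TRAIN_MEAN = 0.5

def estimate_illumination(img: np.ndarray) -> float:
    """Stand-in 'illumination scale' estimate: mean brightness in [0, 1]."""
    return float(img.mean())

def restore_illumination(img: np.ndarray, target_mean: float = TRAIN_MEAN) -> np.ndarray:
    """Gamma-correct the image so its mean brightness matches the target.

    Solves mean**gamma ~= target_mean for a single global gamma, then
    applies it pixel-wise. This plays the role of IER's restoring step.
    """
    eps = 1e-6
    mean = estimate_illumination(img)
    gamma = np.log(target_mean + eps) / np.log(mean + eps)
    return np.clip(img, eps, 1.0) ** gamma

# A dark "test" image is pushed toward the training brightness.
dark = np.full((4, 4), 0.1)
restored = restore_illumination(dark)

# MTOR's mixed-training idea, equally schematically: train on the union of
# original images and reconstructed (restored-style) images sharing labels.
originals = [np.full((4, 4), 0.5)]
mixed_batch = originals + [restored]
```

The per-image normalization step is the key design point the abstract describes: by mapping every test image onto the training illumination scale, only one illumination condition ever needs pedestrian annotations.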
doi 10.1109/TCSVT.2022.3169422
issn 1051-8215
eissn 1558-2205
source IEEE Electronic Library (IEL) Journals
subjects Cameras
Datasets
Effectiveness
generative adversarial network
Illumination
illumination-adaptive
Image reconstruction
Image restoration
Lighting
Pedestrians
Person re-identification
Retrieval
Source code
Task analysis
Testing
Training