Grayscale Enhancement Colorization Network for Visible-Infrared Person Re-Identification

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-03, Vol. 32 (3), pp. 1418-1430
Main Authors: Zhong, Xian, Lu, Tianyou, Huang, Wenxin, Ye, Mang, Jia, Xuemei, Lin, Chia-Wen
Format: Article
Language:English
Description
Summary: Visible-infrared person re-identification (VI-ReID) is an emerging and challenging cross-modality image matching problem, driven by the explosive growth of surveillance data in night-time applications. To handle the large modality gap, various generative adversarial network models have been developed to eliminate cross-modality variations within a cross-modal image generation framework. However, the lack of point-wise cross-modality ground truths makes it extremely challenging to learn such a cross-modal image generator. To address this problem, we learn the correspondence between single-channel infrared images and three-channel visible images by generating intermediate grayscale images as auxiliary information to colorize the single-modality infrared images. We propose a grayscale enhancement colorization network (GECNet) to bridge the modality gap by retaining the structure of the colored image, which contains rich information. To simulate the infrared-to-visible transformation, the point-wise transformed grayscale images greatly enhance the colorization process. Experiments on two visible-infrared cross-modality person re-identification datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
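The abstract's key idea is that a point-wise grayscale transform of a visible image yields a single-channel proxy for the infrared modality, which a colorization network can then learn to map back to three channels. The sketch below illustrates only that point-wise transform; the luminance weights (ITU-R BT.601) and the array shapes are assumptions for illustration, not details taken from the paper, and the colorization network itself is not shown.

import numpy as np

def rgb_to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Point-wise luminance transform (assumed BT.601 weights).

    rgb: H x W x 3 array with values in [0, 1].
    Returns an H x W single-channel grayscale image.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

# Hypothetical usage: the grayscale image stands in for the single-channel
# infrared-like input; a colorization network (not shown) would be trained
# to recover the 3-channel visible image from it.
visible = np.random.rand(256, 128, 3)   # stand-in for a visible-light image
gray = rgb_to_grayscale(visible)        # point-wise transformed grayscale
print(gray.shape)                       # (256, 128)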
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2021.3072171