Multi-cue fusion: Discriminative enhancing for person re-identification
Published in: Journal of Visual Communication and Image Representation, 2019-01, Vol. 58, pp. 46-52
Main Authors: , ,
Format: Article
Language: English
Summary:
•We propose a novel framework for person re-identification.
•We combine Gaussian features with deep semantic features to enhance the discrimination of the overall features.
•Two identification losses and a verification loss are applied to our fusion model.
•Experimental results on benchmark datasets achieve state-of-the-art person re-identification performance.
Person re-identification is an emerging research field in computer vision. Our paper studies how to improve the discrimination of person features. We find that some distinctive characteristics of people do not receive sufficient attention in the semantic features learned by deep networks. However, some features obtained by traditional methods express color better, and these features are an important cue for re-identification. Therefore, in this paper, we combine traditional Gaussian features with deep semantic features to enhance the discrimination of the overall features. We achieve good performance on two public datasets (Market1501 and VIPeR) under three main distance metric learning (DML) methods. In addition, we apply this model to the task of vehicle re-identification; experiments show that our method brings a substantial improvement on the VeRi vehicle dataset. We compare our results with current state-of-the-art results, which indicates the effectiveness of our model.
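To make the fusion scheme concrete, the sketch below illustrates the idea described in the abstract: a deep semantic branch is concatenated with a projected handcrafted Gaussian descriptor, and the combined model is trained with two identification (cross-entropy) losses plus a pairwise verification loss. This is not the authors' implementation; the ResNet-50 backbone, the GOG-style descriptor dimension, the embedding sizes, and the exact loss arrangement are assumptions made only for illustration.

```python
# Minimal sketch (assumed details) of a multi-cue fusion model for re-identification:
# a deep semantic branch plus a handcrafted Gaussian-descriptor branch, trained with
# two identification (cross-entropy) losses and a pairwise verification loss.
import torch
import torch.nn as nn
import torchvision.models as models


class MultiCueFusion(nn.Module):
    def __init__(self, num_ids, gog_dim=7567, embed_dim=512):   # dimensions are illustrative
        super().__init__()
        resnet = models.resnet50()                               # deep semantic branch (no pretrained weights here)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.deep_fc = nn.Linear(2048, embed_dim)                # embeds the CNN feature
        self.gog_fc = nn.Linear(gog_dim, embed_dim)              # projects the handcrafted Gaussian descriptor
        self.id_head_deep = nn.Linear(embed_dim, num_ids)        # identification loss #1 (deep branch)
        self.id_head_fused = nn.Linear(2 * embed_dim, num_ids)   # identification loss #2 (fused feature)
        self.verif_head = nn.Linear(2 * embed_dim, 2)            # verification: same / different person

    def forward(self, img, gog_feat):
        deep = self.deep_fc(self.backbone(img).flatten(1))
        hand = self.gog_fc(gog_feat)
        fused = torch.cat([deep, hand], dim=1)                   # multi-cue fusion by concatenation
        return fused, self.id_head_deep(deep), self.id_head_fused(fused)

    def verification_logits(self, fused_a, fused_b):
        # The verification head operates on the element-wise squared difference of a pair.
        return self.verif_head((fused_a - fused_b) ** 2)


def training_losses(model, img_a, gog_a, img_b, gog_b, id_a, id_b):
    ce = nn.CrossEntropyLoss()
    fused_a, logits_deep_a, logits_fused_a = model(img_a, gog_a)
    fused_b, _, _ = model(img_b, gog_b)
    same = (id_a == id_b).long()                                 # 1 if the pair shares an identity
    id_loss = ce(logits_deep_a, id_a) + ce(logits_fused_a, id_a) # two identification losses
    verif_loss = ce(model.verification_logits(fused_a, fused_b), same)
    return id_loss + verif_loss
```

In a real system the handcrafted descriptor would typically be normalized before projection and the three loss terms balanced with weights; those details are omitted from this sketch.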
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2018.11.023