
GaitGANv2: Invariant gait feature extraction using generative adversarial networks

Bibliographic Details
Published in: Pattern Recognition, 2019-03, Vol. 87, pp. 179–189
Main Authors: Yu, Shiqi; Liao, Rijun; An, Weizhi; Chen, Haifeng; García, Edel B.; Huang, Yongzhen; Poh, Norman
Format: Article
Language: English
Description
Summary: The performance of gait recognition can be adversely affected by many sources of variation, such as view angle, clothing, the presence and type of a carried bag, posture, and occlusion. To extract invariant gait features, we propose a method called GaitGANv2, which is based on generative adversarial networks (GANs). In the proposed method, a GAN model is taken as a regressor to generate a canonical side view of a walking gait in normal clothing, without a carried bag. A unique advantage of this approach is that, unlike other methods, GaitGANv2 does not need to determine the view angle before generating invariant gait images. Indeed, only one model is needed to account for all possible sources of variation, such as carrying conditions and varying view angles. The most important computational challenge, however, is to retain useful identity information when generating the invariant gait images. To this end, our approach differs from the traditional GAN in that GaitGANv2 contains two discriminators instead of one: a fake/real discriminator and an identification discriminator. While the first ensures that the generated gait images are realistic, the second preserves the human identity information. GaitGANv2 improves over GaitGANv1 by adopting a multi-loss strategy that optimizes the network to increase the inter-class distance and reduce the intra-class distance at the same time. Experimental results show that GaitGANv2 can achieve state-of-the-art performance.
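As a concrete illustration of the two-discriminator design described in the abstract, the following is a minimal PyTorch sketch, not the authors' released code. The layer sizes, the 64x64 single-channel input (a gait energy image is assumed), the L1 reconstruction term, and all names (Generator, D_rf, D_id) are illustrative assumptions; only the overall structure follows the abstract: a generator regressing a canonical side view, a fake/real discriminator scoring realism, and an identification discriminator judging pairs of generated and reference images.

```python
# Minimal sketch of a GAN with two discriminators, assuming 64x64
# single-channel gait images; all architectural details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Encoder-decoder that regresses a canonical side-view gait image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),              # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),             # 32x32 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),  # 32x32 -> 64x64
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Conv classifier; in_ch=1 for real/fake, in_ch=2 for identity pairs."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, 2, 1), nn.LeakyReLU(0.2),  # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),     # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),                        # single logit
        )
    def forward(self, x):
        return self.net(x)

G = Generator()
D_rf = Discriminator(in_ch=1)  # fake/real discriminator
D_id = Discriminator(in_ch=2)  # identification discriminator on (image, reference) pairs

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(list(D_rf.parameters()) + list(D_id.parameters()), lr=2e-4)

# One illustrative training step on random stand-in tensors.
x = torch.rand(8, 1, 64, 64)       # gait image under an arbitrary view/condition
target = torch.rand(8, 1, 64, 64)  # canonical side view of the same subject
other = torch.rand(8, 1, 64, 64)   # canonical side view of a different subject
ones, zeros = torch.ones(8, 1), torch.zeros(8, 1)

fake = G(x)

# Discriminator step: enforce realism (D_rf) and same-identity pairing (D_id).
d_loss = (bce(D_rf(target), ones) + bce(D_rf(fake.detach()), zeros)
          + bce(D_id(torch.cat([target, target], 1)), ones)        # genuine pair
          + bce(D_id(torch.cat([fake.detach(), other], 1)), zeros))  # impostor pair
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step (multi-loss): fool both discriminators while matching the target.
g_loss = (bce(D_rf(fake), ones)
          + bce(D_id(torch.cat([fake, target], 1)), ones)
          + F.l1_loss(fake, target))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

In this sketch, the identity term plays the role the abstract assigns to the identification discriminator: it rewards generated images that pair plausibly with a reference view of the same subject, pulling same-identity outputs together while the adversarial terms keep them realistic.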
ISSN: 0031-3203; 1873-5142
DOI: 10.1016/j.patcog.2018.10.019