
Towards interpretable and robust hand detection via pixel-wise prediction


Bibliographic Details
Published in: Pattern Recognition, 2020-09, Vol. 105, p. 107202, Article 107202
Main Authors: Liu, Dan; Zhang, Libo; Luo, Tiejian; Tao, Lili; Wu, Yanjun
Format: Article
Language: English
Description
Summary:
•An interpretable hand detection method that predicts hand regions at pixel resolution.
•Highlights the discriminative features among multiple layers.
•A more transparent representation of the rotation angle: the rotation map.
•Auxiliary losses to accelerate the convergence of the network.

The lack of interpretability of existing CNN-based hand detection methods makes it difficult to understand the rationale behind their predictions. In this paper, we propose a novel neural network model, which introduces interpretability into hand detection for the first time. The main improvements are: (1) Hands are detected at the pixel level, which explains which pixels form the basis for the decision and improves the transparency of the model. (2) The explainable Highlight Feature Fusion block highlights distinctive features among multiple layers and learns discriminative ones to achieve robust performance. (3) We introduce a transparent representation, the rotation map, to learn rotation features instead of complex and non-transparent rotation and derotation layers. (4) Auxiliary supervision accelerates the training process, saving more than 10 hours in our experiments. Experimental results on the VIVA and Oxford hand detection and tracking datasets show that our method achieves accuracy competitive with state-of-the-art methods at higher speed. Models and code are available at: https://isrc.iscas.ac.cn/gitlab/research/pr2020-phdn.
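The abstract describes pixel-wise hand prediction, a rotation-map output, and auxiliary supervision. The snippet below is not the authors' released code (see the GitLab link above); it is a minimal PyTorch sketch of how such a multi-head, pixel-level detector with an auxiliary loss could be wired up. All names (PixelHandHead, aux_weight) and the per-pixel (sin, cos) encoding of the rotation angle are hypothetical and may differ from the paper's exact formulation.

# Minimal sketch (not the authors' released code): a pixel-wise hand
# prediction head with a rotation map and an auxiliary loss, assuming a
# generic CNN backbone. All names and hyper-parameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelHandHead(nn.Module):
    """Predicts, for every pixel, a hand probability and a rotation map
    (here encoded as sin/cos of the in-plane rotation angle)."""

    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.mask_head = nn.Conv2d(in_channels, 1, kernel_size=1)      # hand / non-hand
        self.rotation_head = nn.Conv2d(in_channels, 2, kernel_size=1)  # (sin, cos) per pixel
        self.aux_head = nn.Conv2d(in_channels, 1, kernel_size=1)       # auxiliary supervision

    def forward(self, fused_feat, aux_feat):
        mask_logits = self.mask_head(fused_feat)                    # B x 1 x H x W
        rotation_map = torch.tanh(self.rotation_head(fused_feat))   # B x 2 x H x W, in [-1, 1]
        aux_logits = self.aux_head(aux_feat)                        # intermediate-layer prediction
        return mask_logits, rotation_map, aux_logits


def detection_loss(mask_logits, rotation_map, aux_logits,
                   gt_mask, gt_rotation, aux_weight: float = 0.4):
    """Pixel-wise classification loss + rotation regression on hand pixels
    + an auxiliary loss intended to speed up convergence."""
    cls_loss = F.binary_cross_entropy_with_logits(mask_logits, gt_mask)
    # Only supervise the rotation map where a hand is actually present.
    hand_pixels = gt_mask.expand_as(rotation_map)
    rot_loss = F.smooth_l1_loss(rotation_map * hand_pixels, gt_rotation * hand_pixels)
    aux_loss = F.binary_cross_entropy_with_logits(
        aux_logits, F.interpolate(gt_mask, size=aux_logits.shape[-2:]))
    return cls_loss + rot_loss + aux_weight * aux_loss


if __name__ == "__main__":
    head = PixelHandHead(in_channels=256)
    fused = torch.randn(2, 256, 64, 64)   # fused multi-layer features
    aux = torch.randn(2, 256, 32, 32)     # an intermediate feature map
    gt_mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
    gt_rot = torch.rand(2, 2, 64, 64) * 2 - 1
    loss = detection_loss(*head(fused, aux), gt_mask, gt_rot)
    loss.backward()
    print(float(loss))

Encoding the angle as per-pixel sine/cosine components is one common way to avoid the discontinuity at 0/360 degrees; the paper's rotation map may use a different parameterization.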
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2020.107202