Integrated generalized zero-shot learning for fine-grained classification
Highlights:
• Integrates embedding learning (EL) and feature synthesizing (FS) styles of GZSL.
• New attention module to explore distinctive local features for fine-grained GZSL.
• Novel mutual learning by minimizing losses between EL and adversarial FS.
• New similarity score using mutual information of seen and unseen domain semantics.
• Superior empirical performance on benchmark datasets.
Published in: Pattern Recognition 2022-02, Vol. 122, p. 108246, Article 108246
Main Authors: , , , ,
Format: Article
Language: English
Summary:
Embedding learning (EL) and feature synthesizing (FS) are two popular categories of fine-grained GZSL methods. EL or FS methods that use only global features cannot discriminate fine details in the absence of local features, while methods that exploit local features neglect either direct attribute guidance or global information. Consequently, neither style performs well on its own. In this paper, we propose to explore global and direct attribute-supervised local visual features for both the EL and FS categories in an integrated manner for fine-grained GZSL. The proposed integrated network has an EL sub-network and an FS sub-network, so it can be tested in two ways. We propose a novel two-step dense attention mechanism to discover attribute-guided local visual features. We introduce new mutual learning between the sub-networks to exploit mutually beneficial information during optimization. Moreover, we propose to compute source-target class similarity based on mutual information and to transfer-learn the target classes, reducing bias towards the source domain during testing. We demonstrate that the proposed method outperforms contemporary methods on benchmark datasets.
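The abstract does not specify the exact form of the mutual-learning objective between the EL and FS sub-networks. As a hedged illustration only, the sketch below implements a generic deep-mutual-learning term: a symmetric KL divergence that pulls each sub-network's softened class posterior toward the other's. All function names (`softmax`, `kl_divergence`, `mutual_learning_loss`) and the temperature parameter are hypothetical and not taken from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; a higher temperature gives softer distributions.
    (Hypothetical helper, not from the paper.)"""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over the same classes."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def mutual_learning_loss(el_logits, fs_logits, temperature=2.0):
    """Symmetric KL between the two sub-networks' softened predictions.

    A generic stand-in for the mutual-learning term described in the abstract:
    each sub-network is regularized toward the other's class posterior, so the
    two branches exchange mutually beneficial information during optimization.
    """
    p = softmax(el_logits, temperature)
    q = softmax(fs_logits, temperature)
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
```

When the two sub-networks agree exactly, the term vanishes; the larger their disagreement, the stronger the pull toward consensus. In practice this term would be added to each branch's own task loss with a weighting coefficient.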
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2021.108246