
Few-Shot Learning with Complex-Valued Neural Networks and Dependable Learning

Bibliographic Details
Published in: International Journal of Computer Vision, 2023, Vol. 131 (1), pp. 385-404
Main Authors: Wang, Runqi; Liu, Zhen; Zhang, Baochang; Guo, Guodong; Doermann, David
Format: Article
Language:English
Description
Summary: We present a flexible, general framework for few-shot learning in which both inter-class differences and intra-class relationships are fully considered to significantly improve recognition performance. We introduce complex-valued convolutional neural networks (CNNs) to describe the subtle differences among inter-class samples and Dependable Learning to capture intra-class relationships. Conventional CNNs rely only on real-valued features and fail to extract finer-grained information. Complex-valued CNNs, on the other hand, provide amplitude and phase information that enhances the feature representation ability through the proposed complex metric module (CMM). Building on the recent episodic training mechanism, CMMs improve representation capacity by extracting robust complex-valued features, which facilitates modeling the subtle relationships among few-shot samples. Furthermore, we use Dependable Learning, a new learning paradigm, to promote robustness against perturbation via a new bilinear optimization that enhances the feature extraction capacity when very few intra-class samples are available. Experiments on two benchmark datasets show that the proposed methods significantly improve performance over other approaches and achieve state-of-the-art results.
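
The abstract describes complex-valued CNNs that supply amplitude and phase information for feature representation. The following is a minimal, hypothetical sketch of how a complex-valued convolution can be composed from two real-valued convolutions in PyTorch; it is not the authors' CMM implementation, and the class name ComplexConv2d, channel counts, and input shapes are illustrative assumptions only.

```python
# Minimal sketch (assumption, not the paper's code): a complex-valued
# convolution built from two real-valued convolutions, plus extraction
# of amplitude and phase from the resulting complex feature maps.
import torch
import torch.nn as nn


class ComplexConv2d(nn.Module):
    """(a + ib) * (w_r + i w_i) = (a*w_r - b*w_i) + i(a*w_i + b*w_r)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, real, imag):
        # Real and imaginary parts of the complex convolution output.
        out_r = self.conv_r(real) - self.conv_i(imag)
        out_i = self.conv_r(imag) + self.conv_i(real)
        return out_r, out_i


if __name__ == "__main__":
    layer = ComplexConv2d(3, 16)
    real = torch.randn(2, 3, 32, 32)
    imag = torch.randn(2, 3, 32, 32)
    out_r, out_i = layer(real, imag)
    # Amplitude and phase of the complex-valued feature maps, the two
    # quantities the abstract says enrich the feature representation.
    amplitude = torch.sqrt(out_r ** 2 + out_i ** 2)
    phase = torch.atan2(out_i, out_r)
    print(amplitude.shape, phase.shape)
```

In a metric-based few-shot setting such as the one the abstract outlines, features like these would be compared between support and query samples within each training episode; the details of that comparison belong to the paper's CMM and are not reproduced here.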
ISSN: 0920-5691, 1573-1405
DOI: 10.1007/s11263-022-01700-x