Progressive Meta-Learning With Curriculum

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-09, Vol. 32 (9), p. 5916-5930
Main Authors: Zhang, Ji, Song, Jingkuan, Gao, Lianli, Liu, Ye, Shen, Heng Tao
Format: Article
Language:English
Summary: Meta-learning offers an effective solution for learning new concepts under scarce supervision through an episodic-training scheme: a series of target-like tasks sampled from base classes is sequentially fed into a meta-learner to extract cross-task knowledge, which facilitates the quick acquisition of task-specific knowledge of the target task with few samples. Despite its noticeable improvements, the episodic-training strategy samples tasks randomly and uniformly, without considering their hardness and quality, and so may not progressively improve the meta-learner's generalization. In this paper, we propose Progressive Meta-learning, which presents tasks from easy to hard. First, based on a predefined curriculum, we develop a Curriculum-Based Meta-learning (CubMeta) method. CubMeta proceeds in a stepwise manner; in each step, we design a BrotherNet module to construct harder tasks and an effective learning scheme for obtaining an ensemble of stronger meta-learners. We then go a step further and propose an end-to-end Self-Paced Meta-learning (SepMeta) method. The curriculum in SepMeta is integrated as a regularization term into the objective, so that the meta-learner can measure the hardness of tasks adaptively, according to what the model has already learned. Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed methods. Our code is available at https://github.com/nobody-777 .
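For intuition, below is a minimal Python sketch of the classic self-paced weighting rule that the summary alludes to, not the paper's actual SepMeta objective: each sampled task gets a binary weight admitting it into the meta-update only if its loss falls below an "age" threshold lam, which grows so harder tasks enter the curriculum over time. All names here (sample_task_loss, apply_meta_update) are hypothetical placeholders.

import random

def self_paced_weights(losses, lam):
    # Closed-form minimizer of sum_i (v_i * loss_i - lam * v_i) with v_i in {0, 1}:
    # admit a task (v_i = 1) exactly when its loss is below the age parameter lam.
    return [1.0 if loss < lam else 0.0 for loss in losses]

def meta_train(sample_task_loss, apply_meta_update, lam=0.5, growth=1.05,
               steps=100, tasks_per_batch=8):
    for _ in range(steps):
        # Episodic training: sample a batch of few-shot tasks and score each.
        losses = [sample_task_loss() for _ in range(tasks_per_batch)]
        weights = self_paced_weights(losses, lam)
        # Only tasks the current model finds easy contribute to the update;
        # as lam grows, progressively harder tasks are admitted.
        weighted_loss = sum(w * l for w, l in zip(weights, losses))
        apply_meta_update(weighted_loss)
        lam *= growth  # relax the curriculum over time

# Toy usage with stand-in callables (random losses, no-op update).
meta_train(sample_task_loss=lambda: random.random(),
           apply_meta_update=lambda loss: None)

The hard binary weights follow from the linear self-paced regularizer shown in the comment; softer regularizers yield fractional weights, and the paper's end-to-end formulation may differ in both respects.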
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2022.3164190