Adaptive Progressive Continual Learning
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-10, Vol. 44 (10), pp. 6715-6728
Main Authors:
Format: Article
Language: English
Summary: The continual learning paradigm learns from a continuous stream of tasks in an incremental manner and aims to overcome the notorious issue of catastrophic forgetting. In this work, we propose a new adaptive progressive network framework comprising two models for continual learning, Reinforced Continual Learning (RCL) and Bayesian Optimized Continual Learning with Attention mechanism (BOCL), to address this fundamental issue. The core idea of the framework is to dynamically and adaptively expand the neural network structure upon the arrival of new tasks; RCL achieves this with reinforcement learning and BOCL with Bayesian optimization. An outstanding advantage of the proposed framework is that it does not forget previously learned knowledge, because the architecture is controlled adaptively. We propose effective ways for the two methods to employ the learned knowledge to control the size of the network: RCL uses previous knowledge directly, while BOCL selectively utilizes previous knowledge (e.g., feature maps of previous tasks) via an attention mechanism. Experiments on variants of MNIST, CIFAR-100, and the Sequence of 5-Datasets demonstrate that our methods outperform the state of the art in preventing catastrophic forgetting and fit new tasks better with the same or fewer computing resources.
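Below is a minimal PyTorch-style sketch of the core idea described in the summary: growing the network when a new task arrives while freezing previously learned units so earlier tasks are not forgotten. The class and method names (`ExpandableLinear`, `ExpandableNet`, `add_task`) are hypothetical illustrations, not the authors' implementation, and the expansion size `extra_units`, which RCL would choose with a reinforcement-learning controller and BOCL with Bayesian optimization, is passed in here as a plain argument.

```python
import torch
import torch.nn as nn


class ExpandableLinear(nn.Module):
    """Linear layer whose output width can grow; previously learned units are frozen."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(in_features, out_features)])

    def expand(self, extra_out):
        # Freeze the weights learned on earlier tasks so they cannot be overwritten.
        for block in self.blocks:
            for p in block.parameters():
                p.requires_grad = False
        self.blocks.append(nn.Linear(self.blocks[0].in_features, extra_out))

    def forward(self, x):
        # Outputs of the old (frozen) units come first, then the newly added units.
        return torch.cat([block(x) for block in self.blocks], dim=-1)


class ExpandableNet(nn.Module):
    """One expandable hidden layer plus one classification head per task."""

    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.hidden = ExpandableLinear(in_dim, hidden)
        self.heads = nn.ModuleList([nn.Linear(hidden, n_classes)])  # head for task 0

    def current_width(self):
        return sum(block.out_features for block in self.hidden.blocks)

    def add_task(self, n_classes, extra_units):
        # In the paper's framework, `extra_units` would be proposed adaptively
        # (reinforcement learning in RCL, Bayesian optimization in BOCL).
        if extra_units > 0:
            self.hidden.expand(extra_units)
        self.heads.append(nn.Linear(self.current_width(), n_classes))

    def forward(self, x, task_id):
        h = torch.relu(self.hidden(x))
        head = self.heads[task_id]
        # Older heads only see the features that existed when their task was trained.
        return head(h[..., :head.in_features])


# Usage sketch: train on task 0, then expand for task 1 with 16 extra hidden units.
net = ExpandableNet(in_dim=784, hidden=64, n_classes=10)
net.add_task(n_classes=10, extra_units=16)
logits_task1 = net(torch.randn(8, 784), task_id=1)  # uses all 80 hidden units
logits_task0 = net(torch.randn(8, 784), task_id=0)  # old task still uses its frozen 64 units
```

The design choice this sketch illustrates is that forgetting is prevented structurally (old units are frozen and old heads keep reading only their original features), while new capacity added per task is what fits the new data; how much capacity to add is the decision the paper delegates to reinforcement learning or Bayesian optimization.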
ISSN: 0162-8828, 2160-9292, 1939-3539
DOI: 10.1109/TPAMI.2021.3095064