
Dynamic Model Structure Adjustment to Realize Quantum Continual Learning Based on Quantum Data

Bibliographic Details
Main Authors: Xu, Hailiang, Situ, Haozhen
Format: Conference Proceeding
Language: English
Online Access: Request full text
Description
Summary: Catastrophic forgetting emerges when a neural network's parameters undergo continuous updates during the sequential training of multiple tasks. The ongoing adaptation, while enhancing the model's suitability for new tasks, inadvertently leads to a degradation in performance on previously learned tasks. This challenge significantly impedes the sequential learning capabilities essential for the advancement of artificial general intelligence. The phenomenon of catastrophic forgetting also occurs in the field of quantum machine learning, where parametric quantum circuits serve a role analogous to neural networks. In this study, our focus is to explore strategies for mitigating catastrophic forgetting within quantum learning models for quantum data. In this context, we employ a task-based hard attention mechanism, which automatically generates masks for each task to regulate the network's learning process and resist catastrophic forgetting. To our knowledge, the proposed method is the first in the field of quantum machine learning to adjust the model structure to prevent catastrophic forgetting. Numerical simulation results demonstrate that our approach preserves the model's performance on previous tasks without compromising its ability to acquire new knowledge. This effective resistance to catastrophic forgetting marks a significant stride in advancing quantum continual learning.
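The mask-gated update that the summary describes can be illustrated with a minimal classical sketch (an assumption for illustration only, not the paper's actual implementation): a toy parameter vector stands in for the parametric quantum circuit, each task's hard-attention mask selects which parameters it may train, and the accumulated mask of finished tasks blocks gradient flow to parameters those tasks depend on. All names (`train_task`, `cumulative_mask`, the quadratic toy loss) are hypothetical.

```python
import numpy as np

# Illustrative sketch of task-based hard attention for continual learning:
# each task learns with a binary mask over parameters, and gradients for
# parameters already claimed by earlier tasks are blocked, so later
# training cannot overwrite them (resisting catastrophic forgetting).

rng = np.random.default_rng(0)
n_params = 8
theta = rng.normal(size=n_params)      # stand-in for PQC parameters
cumulative_mask = np.zeros(n_params)   # union of masks from finished tasks

def train_task(target, task_mask, steps=200, lr=0.1):
    """Gradient descent on a toy quadratic loss ||theta - target||^2.
    Updates are gated twice: by the current task's mask (this task only
    touches its own parameters) and by the complement of the cumulative
    mask (parameters owned by previous tasks stay frozen)."""
    global theta
    for _ in range(steps):
        grad = 2.0 * (theta - target)        # gradient of the toy loss
        grad *= task_mask                    # attend to this task's params
        grad *= (1.0 - cumulative_mask)      # block old tasks' params
        theta -= lr * grad

# Task 1 claims the first half of the parameters and trains toward +1.
mask1 = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
train_task(target=np.ones(n_params), task_mask=mask1)
theta_after_task1 = theta.copy()
cumulative_mask = np.maximum(cumulative_mask, mask1)

# Task 2 claims the second half and trains toward -1; because of the
# cumulative mask, task 1's parameters are untouched.
mask2 = 1.0 - mask1
train_task(target=-np.ones(n_params), task_mask=mask2)

# Task 1's parameters are exactly preserved after training task 2.
print(np.allclose(theta[:4], theta_after_task1[:4]))  # True
```

The paper's mechanism additionally learns the masks themselves (rather than fixing them by hand as here) and applies them to quantum circuit structure; this sketch only shows the gradient-gating principle that prevents new tasks from degrading old ones.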
ISSN:2379-190X
DOI:10.1109/ICASSP48485.2024.10448379