Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation
We present a novel class incremental learning approach based on deep neural networks, which continually learns new tasks with limited memory for storing examples in the previous tasks. Our algorithm is based on knowledge distillation and provides a principled way to maintain the representations of old models while adjusting to new tasks effectively. The proposed method estimates the relationship between the representation changes and the resulting loss increases incurred by model updates. It minimizes the upper bound of the loss increases using the representations, which exploits the estimated importance of each feature map within a backbone model. Based on the importance, the model restricts updates of important features for robustness while allowing changes in less critical features for flexibility. This optimization strategy effectively alleviates the notorious catastrophic forgetting problem despite the limited accessibility of data in the previous tasks. The experimental results show significant accuracy improvement of the proposed algorithm over the existing methods on the standard datasets. Code is available.
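The core idea in the abstract — penalizing changes to feature maps in proportion to their estimated importance, so that important features stay stable while unimportant ones remain free to adapt — can be sketched as follows. This is a minimal NumPy illustration under assumed choices (a Fisher-style squared-gradient importance estimate and a Euclidean per-channel distillation penalty), not the authors' released implementation; the function names are hypothetical.

```python
import numpy as np

def feature_importance(grads):
    """Estimate per-channel importance from gradients of the loss
    w.r.t. a feature map of shape (N, C, H, W).

    Uses the mean squared gradient per channel, a common Fisher-style
    proxy; the paper's exact estimator may differ.
    Returns an array of shape (C,).
    """
    return np.mean(grads ** 2, axis=(0, 2, 3))

def consolidation_loss(feats_new, feats_old, importance):
    """Importance-weighted feature distillation.

    feats_new / feats_old: activations of the current and frozen old
    model, shape (N, C, H, W). importance: shape (C,). Changes in
    important channels are penalized more heavily, which restricts
    their updates; low-importance channels stay flexible.
    """
    diff = (feats_new - feats_old) ** 2          # (N, C, H, W)
    per_channel = np.mean(diff, axis=(0, 2, 3))  # (C,)
    return float(np.sum(importance * per_channel))
```

In training, this loss would be added to the classification loss on the new task, so that the overall objective trades plasticity on new classes against drift of the channels the old tasks depended on.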
Published in: | arXiv.org, 2022-04 |
---|---|
Main Authors: | Kang, Minsoo; Park, Jaeyoo; Han, Bohyung |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Artificial neural networks; Distillation; Feature maps; Machine learning; Memory tasks; Optimization; Representations; Upper bounds |
Online Access: | Get full text |
creator | Kang, Minsoo; Park, Jaeyoo; Han, Bohyung |
description | We present a novel class incremental learning approach based on deep neural networks, which continually learns new tasks with limited memory for storing examples in the previous tasks. Our algorithm is based on knowledge distillation and provides a principled way to maintain the representations of old models while adjusting to new tasks effectively. The proposed method estimates the relationship between the representation changes and the resulting loss increases incurred by model updates. It minimizes the upper bound of the loss increases using the representations, which exploits the estimated importance of each feature map within a backbone model. Based on the importance, the model restricts updates of important features for robustness while allowing changes in less critical features for flexibility. This optimization strategy effectively alleviates the notorious catastrophic forgetting problem despite the limited accessibility of data in the previous tasks. The experimental results show significant accuracy improvement of the proposed algorithm over the existing methods on the standard datasets. Code is available. |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2647059520 |
source | Publicly Available Content (ProQuest) |
subjects | Algorithms; Artificial neural networks; Distillation; Feature maps; Machine learning; Memory tasks; Optimization; Representations; Upper bounds |
title | Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T22%3A54%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Class-Incremental%20Learning%20by%20Knowledge%20Distillation%20with%20Adaptive%20Feature%20Consolidation&rft.jtitle=arXiv.org&rft.au=Kang,%20Minsoo&rft.date=2022-04-02&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2647059520%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_26470595203%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2647059520&rft_id=info:pmid/&rfr_iscdi=true |