
Progress & Compress: A scalable framework for continual learning

Bibliographic Details
Published in: arXiv.org, 2018-07
Main Authors: Schwarz, Jonathan; Luketina, Jelena; Czarnecki, Wojciech M; Grabska-Barwinska, Agnieszka; Teh, Yee Whye; Pascanu, Razvan; Hadsell, Raia
Format: Article
Language: English
Subjects: Domains; Handwriting; Knowledge base; Maze learning; Parameters
Online Access: Get full text
Description: We introduce a conceptually simple and scalable framework for continual learning domains where tasks are learned sequentially. Our method is constant in the number of parameters and is designed to preserve performance on previously encountered tasks while accelerating learning progress on subsequent problems. This is achieved by training a network with two components: a knowledge base, capable of solving previously encountered problems, which is connected to an active column that is employed to efficiently learn the current task. After learning a new task, the active column is distilled into the knowledge base, taking care to protect any previously acquired skills. This cycle of active learning (progression) followed by consolidation (compression) requires no architecture growth, no access to or storing of previous data or tasks, and no task-specific parameters. We demonstrate the progress & compress approach on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
EISSN: 2331-8422
Record ID: cdi_proquest_journals_2073843654
Source: Publicly Available Content Database
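
The Description above outlines a two-phase training cycle: a "progress" step that fits an active column to the new task, then a "compress" step that distills the column into the knowledge base under a penalty protecting earlier skills. As a rough illustration, here is a minimal Python/PyTorch sketch of that cycle. The helper names (progress_phase, compress_phase, make_quadratic_penalty) are hypothetical, the plain quadratic penalty merely stands in for the paper's online EWC, and the lateral connections from the knowledge base into the active column are omitted.

import torch
import torch.nn.functional as F

def progress_phase(active, loader, lr=1e-3):
    # Progress: train only the active column on the current task; the
    # knowledge base is frozen during this phase.
    opt = torch.optim.Adam(active.parameters(), lr=lr)
    for x, y in loader:
        opt.zero_grad()
        F.cross_entropy(active(x), y).backward()
        opt.step()

def make_quadratic_penalty(kb):
    # Stand-in for online EWC: a uniform quadratic pull toward the
    # knowledge base's parameters as they were before consolidation.
    anchors = [p.detach().clone() for p in kb.parameters()]
    def penalty(model):
        return sum(((p - a) ** 2).sum()
                   for p, a in zip(model.parameters(), anchors))
    return penalty

def compress_phase(active, kb, loader, lam=1.0, lr=1e-3):
    # Compress: distill the active column (teacher) into the knowledge
    # base (student) while the penalty protects previously learned skills.
    penalty = make_quadratic_penalty(kb)
    opt = torch.optim.Adam(kb.parameters(), lr=lr)
    for x, _ in loader:
        opt.zero_grad()
        with torch.no_grad():
            teacher = F.log_softmax(active(x), dim=-1)
        student = F.log_softmax(kb(x), dim=-1)
        loss = F.kl_div(student, teacher, log_target=True,
                        reduction="batchmean") + lam * penalty(kb)
        loss.backward()
        opt.step()

def progress_and_compress(active, kb, task_loaders):
    # One pass over a task sequence: learn, then consolidate, task by
    # task. (The paper also resets the active column between tasks.)
    for loader in task_loaders:
        progress_phase(active, loader)
        compress_phase(active, kb, loader)

Note that the two columns are assumed here to share an output space (any torch.nn.Module mapping inputs to logits works for both), which keeps the distillation step a straightforward KL term between their output distributions.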