Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks

While tasks in realistic settings can come with varying numbers of instances and classes, existing meta-learning approaches for few-shot classification assume that the number of instances per task and class is fixed. Because of this restriction, they learn to utilize the meta-knowledge equally across all tasks, even when the number of instances per task and class varies widely. Moreover, they do not account for distributional differences in unseen tasks, for which the meta-knowledge may be less useful depending on task relatedness. To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of meta-learning and task-specific learning within each task. By learning the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or on task-specific learning. We formulate this objective in a Bayesian inference framework and tackle it using variational inference. We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on multiple realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches. A further ablation study confirms the effectiveness of each balancing component and of the Bayesian learning framework.
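
The abstract describes learnable balancing variables that decide how much a task should rely on shared meta-knowledge versus its own task-specific learning. The sketch below is a minimal illustration of that idea, assuming PyTorch, a plain linear classifier, and a single scalar balancing variable gamma that scales a MAML-style inner-loop update; the function name, the constant gamma, and the toy data are hypothetical simplifications, not the authors' Bayesian TAML implementation, which infers several balancing variables via variational inference.

```python
# Hypothetical sketch (not the authors' code): a MAML-style inner loop in which a
# task-level balancing variable gamma scales the task-specific update relative to
# the shared meta-initialization.
import torch
import torch.nn.functional as F


def inner_adapt(meta_params, support_x, support_y, gamma, inner_lr=0.1, steps=5):
    """Adapt a copy of the meta-learned linear classifier on one task's support set.

    gamma in [0, 1] acts as a balancing variable: values near 0 keep the solution
    close to the meta-knowledge (the shared initialization), values near 1 let
    task-specific learning dominate. In a Bayesian treatment, gamma would be
    sampled from a task-conditioned variational posterior; here it is a constant.
    """
    weight, bias = (p.clone() for p in meta_params)
    for _ in range(steps):
        logits = F.linear(support_x, weight, bias)
        loss = F.cross_entropy(logits, support_y)
        grad_w, grad_b = torch.autograd.grad(loss, (weight, bias), create_graph=True)
        # The balancing variable modulates the effective inner-loop step size.
        weight = weight - gamma * inner_lr * grad_w
        bias = bias - gamma * inner_lr * grad_b
    return weight, bias


# Toy usage: a hypothetical 5-way task with 20-dimensional features.
meta_w = torch.randn(5, 20, requires_grad=True)
meta_b = torch.zeros(5, requires_grad=True)
support_x = torch.randn(10, 20)
support_y = torch.randint(0, 5, (10,))
adapted_w, adapted_b = inner_adapt((meta_w, meta_b), support_x, support_y, gamma=0.5)
```

In the full method, an outer meta-objective evaluated on each task's query set would update both the shared initialization and the posterior over the balancing variables; that outer loop is omitted here for brevity.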

Bibliographic Details
Published in: arXiv.org, 2022-02
Main Authors: Lee, Hae Beom, Lee, Hayeon, Na, Donghyun, Kim, Saehoon, Park, Minseop, Yang, Eunho, Hwang, Sung Ju
Format: Article
Language: English
Subjects: Bayesian analysis; Datasets; Machine learning; Statistical inference
Publisher: Cornell University Library, arXiv.org (Ithaca)
EISSN: 2331-8422
Source: Publicly Available Content Database
Online Access: Get full text