Learning GFlowNets from partial episodes for improved convergence and stability
Generative flow networks (GFlowNets) are a family of algorithms for training a sequential sampler of discrete objects under an unnormalized target density and have been successfully used for various probabilistic modeling tasks. Existing training objectives for GFlowNets are either local to states or transitions, or propagate a reward signal over an entire sampling trajectory. We argue that these alternatives represent opposite ends of a gradient bias-variance tradeoff and propose a way to exploit this tradeoff to mitigate its harmful effects. Inspired by the TD(\(\lambda\)) algorithm in reinforcement learning, we introduce subtrajectory balance or SubTB(\(\lambda\)), a GFlowNet training objective that can learn from partial action subsequences of varying lengths. We show that SubTB(\(\lambda\)) accelerates sampler convergence in previously studied and new environments and enables training GFlowNets in environments with longer action sequences and sparser reward landscapes than what was possible before. We also perform a comparative analysis of stochastic gradient dynamics, shedding light on the bias-variance tradeoff in GFlowNet training and the advantages of subtrajectory balance.
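The objective sketched in the abstract can be written out as follows. This is a minimal reconstruction using standard GFlowNet notation (state flow \(F\), forward policy \(P_F\), backward policy \(P_B\)) rather than the paper's exact formulation; details such as how the reward \(R(x)\) enters at terminal states may differ. For a segment \(s_i \to s_{i+1} \to \dots \to s_j\) of a sampled trajectory, subtrajectory balance asks that

\[ F(s_i) \prod_{k=i}^{j-1} P_F(s_{k+1} \mid s_k) \;=\; F(s_j) \prod_{k=i}^{j-1} P_B(s_k \mid s_{k+1}), \]

and SubTB(\(\lambda\)) minimizes the squared log-ratio of the two sides over all subtrajectories of a length-\(n\) trajectory \(\tau\), weighted geometrically by segment length:

\[ \mathcal{L}_{\mathrm{SubTB}(\lambda)}(\tau) \;=\; \frac{\sum_{0 \le i < j \le n} \lambda^{\,j-i} \left( \log \dfrac{F(s_i) \prod_{k=i}^{j-1} P_F(s_{k+1} \mid s_k)}{F(s_j) \prod_{k=i}^{j-1} P_B(s_k \mid s_{k+1})} \right)^{2}}{\sum_{0 \le i < j \le n} \lambda^{\,j-i}}. \]

Larger \(\lambda\) places more weight on long subtrajectories, approaching a full-trajectory objective with lower bias but higher gradient variance, while smaller \(\lambda\) emphasizes short, lower-variance but more biased segments; this is the bias-variance tradeoff the abstract refers to.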
Published in: | arXiv.org, 2023-06 |
---|---|
Main Authors: | Madan, Kanika; Rector-Brooks, Jarrid; Korablyov, Maksym; Bengio, Emmanuel; Jain, Moksh; Nica, Andrei; Bosc, Tom; Bengio, Yoshua; Malkin, Nikolay |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Bias; Convergence; Flow stability; Machine learning; Tradeoffs; Training; Variance |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Madan, Kanika; Rector-Brooks, Jarrid; Korablyov, Maksym; Bengio, Emmanuel; Jain, Moksh; Nica, Andrei; Bosc, Tom; Bengio, Yoshua; Malkin, Nikolay |
description | Generative flow networks (GFlowNets) are a family of algorithms for training a sequential sampler of discrete objects under an unnormalized target density and have been successfully used for various probabilistic modeling tasks. Existing training objectives for GFlowNets are either local to states or transitions, or propagate a reward signal over an entire sampling trajectory. We argue that these alternatives represent opposite ends of a gradient bias-variance tradeoff and propose a way to exploit this tradeoff to mitigate its harmful effects. Inspired by the TD(\(\lambda\)) algorithm in reinforcement learning, we introduce subtrajectory balance or SubTB(\(\lambda\)), a GFlowNet training objective that can learn from partial action subsequences of varying lengths. We show that SubTB(\(\lambda\)) accelerates sampler convergence in previously studied and new environments and enables training GFlowNets in environments with longer action sequences and sparser reward landscapes than what was possible before. We also perform a comparative analysis of stochastic gradient dynamics, shedding light on the bias-variance tradeoff in GFlowNet training and the advantages of subtrajectory balance. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2718477743 |
source | Publicly Available Content (ProQuest) |
subjects | Algorithms; Bias; Convergence; Flow stability; Machine learning; Tradeoffs; Training; Variance |
title | Learning GFlowNets from partial episodes for improved convergence and stability |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T16%3A46%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Learning%20GFlowNets%20from%20partial%20episodes%20for%20improved%20convergence%20and%20stability&rft.jtitle=arXiv.org&rft.au=Madan,%20Kanika&rft.date=2023-06-03&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2718477743%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_27184777433%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2718477743&rft_id=info:pmid/&rfr_iscdi=true |