Learning compositional programs with arguments and sampling
One of the most challenging goals in designing intelligent systems is empowering them with the ability to synthesize programs from data. Namely, given specific requirements in the form of input/output pairs, the goal is to train a machine learning model to discover a program that satisfies those requirements.
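The abstract frames program synthesis as a search for a program that satisfies a specification given purely as input/output pairs. The snippet below is a minimal, hypothetical illustration of that problem setup; the pair format, the satisfaction check, and the hand-written candidate are assumptions for illustration, not the paper's model, which learns the program with a neural network guided by tree search.

```python
# Hypothetical illustration of program synthesis from input/output pairs.
# The specification is just a list of (input, expected output) examples.
io_pairs = [
    ([3, 1, 2], [1, 2, 3]),
    ([5, 4], [4, 5]),
    ([7], [7]),
]

def satisfies(program, pairs):
    """Check whether `program` maps every example input to its expected output."""
    return all(program(inp) == out for inp, out in pairs)

# A trivially correct hand-written candidate stands in for the learned program.
print(satisfies(sorted, io_pairs))  # True
```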
Published in: | arXiv.org, 2021-10 |
---|---|
Main Authors: | De Toni, Giovanni; Erculiani, Luca; Passerini, Andrea |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Combinatorial analysis; Deep learning; Domain specific languages; Machine learning; Software |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | De Toni, Giovanni; Erculiani, Luca; Passerini, Andrea |
description | One of the most challenging goals in designing intelligent systems is empowering them with the ability to synthesize programs from data. Namely, given specific requirements in the form of input/output pairs, the goal is to train a machine learning model to discover a program that satisfies those requirements. A recent class of methods exploits combinatorial search procedures and deep learning to learn compositional programs. However, they usually generate only toy programs using a domain-specific language that does not provide any high-level feature, such as function arguments, which reduces their applicability in real-world settings. We extend upon a state of the art model, AlphaNPI, by learning to generate functions that can accept arguments. This improvement will enable us to move closer to real computer programs. Moreover, we investigate employing an Approximate version of Monte Carlo Tree Search (A-MCTS) to speed up convergence. We showcase the potential of our approach by learning the Quicksort algorithm, showing how the ability to deal with arguments is crucial for learning and generalization. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2568815651 |
source | Publicly Available Content Database (Proquest) (PQ_SDU_P3) |
subjects | Algorithms; Combinatorial analysis; Deep learning; Domain specific languages; Machine learning; Software |
title | Learning compositional programs with arguments and sampling |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T13%3A04%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Learning%20compositional%20programs%20with%20arguments%20and%20sampling&rft.jtitle=arXiv.org&rft.au=De%20Toni,%20Giovanni&rft.date=2021-10-15&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2568815651%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_25688156513%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2568815651&rft_id=info:pmid/&rfr_iscdi=true |
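The description field above credits two ingredients for learning Quicksort: primitives that accept arguments and an approximate Monte Carlo Tree Search (A-MCTS) over their compositions. The sketch below is only a hypothetical illustration of why argument-taking primitives matter for a Quicksort-style composition; the primitive names (swap, partition, quicksort) and their semantics are assumptions for illustration and do not reproduce the paper's AlphaNPI environment or its A-MCTS search.

```python
# Hypothetical sketch: a Quicksort-style program composed from lower-level
# primitives that take explicit index arguments. Names and semantics are
# illustrative assumptions, not the paper's actual DSL.

def swap(data, i, j):
    """Primitive with arguments: exchange two positions in place."""
    data[i], data[j] = data[j], data[i]

def partition(data, lo, hi):
    """Lomuto partition over [lo, hi]; returns the pivot's final index."""
    pivot = data[hi]
    k = lo
    for idx in range(lo, hi):
        if data[idx] <= pivot:
            swap(data, idx, k)
            k += 1
    swap(data, k, hi)
    return k

def quicksort(data, lo, hi):
    """Higher-level routine composed from argument-taking primitives, recursively."""
    if lo < hi:
        p = partition(data, lo, hi)
        quicksort(data, lo, p - 1)
        quicksort(data, p + 1, hi)

xs = [5, 2, 9, 1, 5, 6]
quicksort(xs, 0, len(xs) - 1)
print(xs)  # [1, 2, 5, 5, 6, 9]
```

Every call site here must pass the sub-range it operates on, which is the kind of parameter passing the abstract argues is needed to move from toy programs toward real, recursive ones.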