
Exemplar Generalization in Reinforcement Learning: Improving Performance with Fewer Exemplars

Bibliographic Details
Published in: Journal of Advanced Computational Intelligence and Intelligent Informatics, 2009-11, Vol. 13 (6), pp. 683-690
Main Authors: Matsushima, Hiroyasu; Hattori, Kiyohiko; Takadama, Keiki
Format: Article
Language: English
Description: This paper focuses on the generalization of exemplars (i.e., good rules) in the reinforcement learning framework and proposes Exemplar Generalization in Reinforcement Learning (EGRL), which extracts useful exemplars from a large set of exemplars provided as prior knowledge and generalizes them by deleting unnecessary (i.e., overlapping) exemplars as far as possible. Intensive simulations on a simple cargo layout problem validate EGRL's effectiveness and reveal the following: (1) EGRL derives good performance with fewer exemplars than approaches that use a sufficient number of exemplars or randomly selected exemplars, and (2) the integration of the covering, deletion, and subsumption mechanisms in EGRL is critical to improving its performance and generalization. (A rough illustrative sketch of the subsumption idea follows this record.)
DOI: 10.20965/jaciii.2009.p0683
Author Affiliations: PRESTO, Japan Science and Technology Agency (JST), 4-1-8 Honcho Kawaguchi, Saitama 332-0012, Japan; The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
Published: 2009-11-20
ISSN: 1343-0130
EISSN: 1883-8014
Source: DOAJ Directory of Open Access Journals
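
The record above describes EGRL only at the abstract level, but it names three mechanisms: covering, deletion, and subsumption. As a rough, hypothetical sketch of what subsumption-based exemplar pruning can look like, the Python below deletes exemplars that are fully covered by a more general exemplar prescribing the same action. The Exemplar class, the ternary condition encoding ('0'/'1'/'#', borrowed from learning classifier systems), and the function names are illustrative assumptions, not the paper's actual representation or algorithm.

```python
# Hypothetical illustration of subsumption-based exemplar pruning.
# The ternary rule encoding ('0'/'1'/'#') is an assumption borrowed from
# learning classifier systems, not necessarily EGRL's representation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Exemplar:
    condition: str  # e.g. "1#0#"; '#' is a wildcard matching 0 or 1
    action: int


def subsumes(general: Exemplar, specific: Exemplar) -> bool:
    """Return True if `general` matches every state that `specific`
    matches and prescribes the same action."""
    if general.action != specific.action:
        return False
    if len(general.condition) != len(specific.condition):
        return False
    return all(g == '#' or g == s
               for g, s in zip(general.condition, specific.condition))


def generalize(exemplars: list[Exemplar]) -> list[Exemplar]:
    """Delete exemplars subsumed by a more general exemplar; the pruned
    set covers the same state-action pairs as the original."""
    unique = list(dict.fromkeys(exemplars))  # drop exact duplicates
    return [ex for ex in unique
            if not any(other != ex and subsumes(other, ex)
                       for other in unique)]


if __name__ == "__main__":
    rules = [
        Exemplar("1#0#", 0),  # general exemplar
        Exemplar("110#", 0),  # redundant: subsumed by "1#0#" / action 0
        Exemplar("1101", 1),  # different action, so it is kept
    ]
    print(generalize(rules))
    # -> [Exemplar(condition='1#0#', action=0),
    #     Exemplar(condition='1101', action=1)]
```

In this toy setting the pruning is a one-shot filter that leaves coverage unchanged; per the abstract, EGRL's actual covering, deletion, and subsumption mechanisms operate together inside a reinforcement learning loop.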