Exemplar Generalization in Reinforcement Learning: Improving Performance with Fewer Exemplars
Published in: Journal of Advanced Computational Intelligence and Intelligent Informatics, 2009-11, Vol. 13 (6), p. 683-690
Main Authors: , ,
Format: Article
Language: English
Summary: This paper focuses on the generalization of exemplars (i.e., good rules) in the reinforcement learning framework and proposes Exemplar Generalization in Reinforcement Learning (EGRL), which extracts useful exemplars from a large set of exemplars provided as prior knowledge and generalizes them by deleting unnecessary (e.g., overlapping) exemplars as much as possible. Intensive simulations of a simple cargo-layout problem to validate the effectiveness of EGRL revealed the following: (1) EGRL achieves good performance with fewer exemplars than approaches that use a sufficient number of exemplars or randomly selected exemplars, and (2) the integration of the covering, deletion, and subsumption mechanisms in EGRL is critical to improving its performance and generalization.
ISSN: 1343-0130, 1883-8014
DOI: 10.20965/jaciii.2009.p0683
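
The deletion-by-subsumption idea named in the summary can be illustrated with a small sketch. The Python below is a hypothetical illustration under assumptions, not the authors' implementation: the `Exemplar` representation, the `#` wildcard encoding of conditions (a convention borrowed from learning classifier systems), and the `subsumes` and `generalize` functions are all invented here; only the idea of deleting exemplars that a more general exemplar already covers comes from the abstract.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Exemplar:
    condition: str  # e.g., "1#0"; '#' is a wildcard matching 0 or 1
    action: int

def subsumes(general: Exemplar, specific: Exemplar) -> bool:
    """True if `general` covers every state that `specific` covers
    and recommends the same action."""
    if general.action != specific.action:
        return False
    if len(general.condition) != len(specific.condition):
        return False
    return all(g == '#' or g == s
               for g, s in zip(general.condition, specific.condition))

def generalize(exemplars: list[Exemplar]) -> list[Exemplar]:
    """Delete every exemplar subsumed by another, keeping a smaller,
    more general rule set."""
    unique = list(dict.fromkeys(exemplars))  # drop exact duplicates first
    return [e for e in unique
            if not any(o is not e and subsumes(o, e) for o in unique)]

if __name__ == "__main__":
    rules = [Exemplar("1#0", 1), Exemplar("110", 1),
             Exemplar("100", 1), Exemplar("011", 0)]
    # "110" and "100" are subsumed by "1#0" (same action), so they are
    # deleted; "011" has no subsumer and is kept.
    print(generalize(rules))
```

Running the example deletes the two specific rules covered by the general rule `1#0`, mirroring the paper's claim that a smaller, generalized exemplar set can suffice for good performance.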