Autoencoder-augmented neuroevolution for visual Doom playing
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Neuroevolution has proven effective at many reinforcement learning tasks, including tasks with incomplete information and delayed rewards, but does not seem to scale well to high-dimensional controller representations, which are needed for tasks where the input is raw pixel data. We propose a novel method where we train an autoencoder to create a comparatively low-dimensional representation of the environment observation, and then use CMA-ES to train neural network controllers acting on this input data. As the behavior of the agent changes the nature of the input data, the autoencoder training progresses throughout evolution. We test this method in the VizDoom environment built on the classic FPS Doom, where it performs well on a health-pack gathering task.
ISSN: 2325-4289
DOI: 10.1109/CIG.2017.8080408
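
The pipeline described in the summary can be pictured roughly as follows. This is a minimal sketch, not the authors' implementation: it assumes flattened grayscale frames, a small fully connected autoencoder, a linear controller evolved with the pycma library, and an autoencoder that is refreshed on frames gathered during each generation. The environment functions reset_env and step_env are hypothetical stand-ins for VizDoom, and all sizes and hyperparameters are illustrative guesses.

```python
# Hedged sketch: autoencoder compresses raw frames to a low-dimensional code,
# CMA-ES evolves a controller acting on that code, and the autoencoder keeps
# training on frames produced by the evolving agents.
import numpy as np
import torch
import torch.nn as nn
import cma

LATENT = 32          # assumed latent dimensionality
FRAME = 64 * 64      # assumed 64x64 grayscale frames, flattened
N_ACTIONS = 3        # e.g. turn left / turn right / move forward

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(FRAME, 256), nn.ReLU(), nn.Linear(256, LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, FRAME))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = AutoEncoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

def reset_env():
    # Hypothetical stand-in for starting a VizDoom episode: returns a flattened frame.
    return np.random.rand(FRAME).astype(np.float32)

def step_env(action):
    # Hypothetical stand-in for one VizDoom step: returns (next frame, reward, done).
    return np.random.rand(FRAME).astype(np.float32), float(action == 0), np.random.rand() < 0.01

def encode(frame):
    # Compress one frame to its latent code (no gradients needed during rollouts).
    with torch.no_grad():
        _, z = ae(torch.as_tensor(frame, dtype=torch.float32).unsqueeze(0))
    return z.numpy().ravel()

def controller_action(weights, z):
    # Linear policy on the latent code; the weight vector comes from CMA-ES.
    w = weights.reshape(N_ACTIONS, LATENT)
    return int(np.argmax(w @ z))

def evaluate(weights, replay):
    # One episode: accumulate reward and store frames for later autoencoder training.
    frame, total = reset_env(), 0.0
    for _ in range(500):
        z = encode(frame)
        frame, reward, done = step_env(controller_action(weights, z))
        replay.append(frame)
        total += reward
        if done:
            break
    return total

def refresh_autoencoder(replay, steps=100):
    # Continue autoencoder training on frames produced by the current controllers,
    # so the representation tracks the changing distribution of observations.
    data = torch.as_tensor(np.stack(replay), dtype=torch.float32)
    for _ in range(steps):
        batch = data[torch.randint(len(data), (64,))]
        recon, _ = ae(batch)
        loss = nn.functional.mse_loss(recon, batch)
        opt.zero_grad()
        loss.backward()
        opt.step()

es = cma.CMAEvolutionStrategy(np.zeros(N_ACTIONS * LATENT), 0.5)
for generation in range(50):
    replay = []
    candidates = es.ask()
    fitnesses = [-evaluate(np.asarray(c), replay) for c in candidates]  # CMA-ES minimises
    es.tell(candidates, fitnesses)
    refresh_autoencoder(replay)  # autoencoder training progresses alongside evolution
```

The interleaving in the last loop is the point of the sketch: each generation's rollouts both score the candidate controllers and supply fresh frames for the autoencoder, so the compressed representation adapts as agent behavior changes.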