
Deep Reinforcement Learning Using a Low-Dimensional Observation Filter for Visual Complex Video Game Playing

Deep Reinforcement Learning (DRL) has produced great achievements since it was proposed, including the possibility of processing raw vision input data. However, training an agent to perform tasks based on image feedback remains a challenge. It requires the processing of large amounts of data from high-dimensional observation spaces, frame by frame, and the agent's actions are computed according to deep neural network policies, end-to-end. Image pre-processing is an effective way of reducing these high dimensional spaces, eliminating unnecessary information present in the scene, supporting the extraction of features and their representations in the agent's neural network. Modern video-games are examples of this type of challenge for DRL algorithms because of their visual complexity. In this paper, we propose a low-dimensional observation filter that allows a deep Q-network agent to successfully play in a visually complex and modern video-game, called Neon Drive.


Bibliographic Details
Published in: arXiv.org, 2022-04
Main Authors: Kich, Victor Augusto; Junior Costa de Jesus; Grando, Ricardo Bedin; Kolling, Alisson Henrique; Heisler, Gabriel Vinícius; Rodrigo da Silva Guerra
Format: Article
Language: English
description Deep Reinforcement Learning (DRL) has produced great achievements since it was proposed, including the possibility of processing raw vision input data. However, training an agent to perform tasks based on image feedback remains a challenge. It requires the processing of large amounts of data from high-dimensional observation spaces, frame by frame, and the agent's actions are computed according to deep neural network policies, end-to-end. Image pre-processing is an effective way of reducing these high dimensional spaces, eliminating unnecessary information present in the scene, supporting the extraction of features and their representations in the agent's neural network. Modern video-games are examples of this type of challenge for DRL algorithms because of their visual complexity. In this paper, we propose a low-dimensional observation filter that allows a deep Q-network agent to successfully play in a visually complex and modern video-game, called Neon Drive.
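The record does not spell out the paper's exact filter, but the kind of low-dimensional observation filter the abstract describes can be sketched as a simple pre-processing step: collapse an RGB game frame to grayscale and downsample it before feeding it to the Q-network. The frame size, output size, and function name below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def observation_filter(frame, out_size=(84, 84)):
    """Reduce an RGB game frame to a low-dimensional grayscale observation.

    Hypothetical sketch: luminance-weighted grayscale conversion followed
    by naive index-based downsampling; the paper's actual filter may differ.
    """
    # Collapse the color channels using standard luminance weights
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # Pick evenly spaced rows/columns to shrink to the target size
    h, w = gray.shape
    ys = np.linspace(0, h - 1, out_size[0]).astype(int)
    xs = np.linspace(0, w - 1, out_size[1]).astype(int)
    small = gray[np.ix_(ys, xs)]
    # Normalize pixel intensities to [0, 1] for the Q-network input
    return (small / 255.0).astype(np.float32)

# Example: a random 210x160 RGB frame shrinks to an 84x84 observation
frame = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
obs = observation_filter(frame)
print(obs.shape)  # (84, 84)
```

Reducing each frame this way cuts the observation from 210×160×3 values to 84×84, which is the sense in which such a filter lowers the dimensionality of the space the agent must learn from.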
identifier EISSN: 2331-8422
source Publicly Available Content (ProQuest)
subjects Algorithms
Artificial neural networks
Complexity
Computer & video games
Deep learning
Feature extraction
Machine learning
Neon
Neural networks
Visual observation