
Energy Storage Management via Deep Q-Networks

Energy storage devices represent environmentally friendly candidates to cope with volatile renewable energy generation. Motivated by the increase in privately owned storage systems, this paper studies the problem of real-time control of a storage unit co-located with a renewable energy generator and an inelastic load. Unlike many approaches in the literature, no distributional assumptions are made on the renewable energy generation or the real-time prices. Building on the deep Q-networks algorithm, a reinforcement learning approach utilizing a neural network is devised in which the storage unit's operational constraints are respected. The neural network approximates the action-value function, which dictates what action (charging, discharging, etc.) to take. Simulations indicate that near-optimal performance can be attained with the proposed learning-based control policy for the storage units.

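The control recipe in the abstract is concrete: train a neural network to approximate the action-value function Q(state, action), and at each step take the feasible action (charge, discharge, or idle) with the highest predicted value. The listing below is a minimal sketch of that pattern in PyTorch, not the authors' implementation: the toy environment, the reward (negative cost of grid imports at the real-time price), the action masking used to enforce the state-of-charge limits, the network size, and every constant (capacity CAP, power limit RATE, efficiency ETA) are illustrative assumptions.

# Minimal DQN controller sketch for a single storage unit.
# Illustrative assumptions throughout; this is not the paper's code.
import random
from collections import deque

import torch
import torch.nn as nn

CAP, RATE, ETA = 10.0, 2.0, 0.95   # assumed capacity (kWh), power limit (kW), efficiency
ACTIONS = [-RATE, 0.0, RATE]       # discharge, idle, charge (battery power, 1 h steps)

qnet = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
target = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
target.load_state_dict(qnet.state_dict())
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)
GAMMA, EPS = 0.99, 0.1

def next_soc(soc, power):
    # Charging stores power*ETA; discharging drains power/ETA from the battery.
    return soc + power * (ETA if power >= 0 else 1.0 / ETA)

def feasible(soc):
    # An action is allowed only if the resulting state of charge stays in [0, CAP].
    return [0.0 <= next_soc(soc, a) <= CAP for a in ACTIONS]

def act(state):
    # Epsilon-greedy over the Q-values, with infeasible actions masked out
    # so the storage unit's operational constraints are never violated.
    mask = torch.tensor(feasible(state[0]))
    if random.random() < EPS:
        return random.choice(mask.nonzero().flatten().tolist())
    q = qnet(torch.tensor(state))
    return int(q.masked_fill(~mask, -1e9).argmax())

def step_env(state, a_idx):
    # Toy stand-in dynamics: state = (state of charge, net load kW, price $/kWh);
    # reward is the negative cost of energy imported from the grid.
    soc, net_load, price = state
    power = ACTIONS[a_idx]
    reward = -price * max(net_load + power, 0.0)
    return [next_soc(soc, power), random.uniform(0.0, 5.0), random.uniform(0.05, 0.5)], reward

def train_step(batch_size=32):
    if len(buffer) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(buffer, batch_size))
    s, s2 = torch.tensor(s), torch.tensor(s2)
    q = qnet(s).gather(1, torch.tensor(a).unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # Bellman target from the frozen target network
        y = torch.tensor(r) + GAMMA * target(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

state = [CAP / 2, 2.0, 0.2]
for t in range(2000):
    a = act(state)
    nxt, r = step_env(state, a)
    buffer.append((state, a, r, nxt))
    train_step()
    if t % 200 == 0:  # periodic target-network synchronization
        target.load_state_dict(qnet.state_dict())
    state = nxt

Masking infeasible actions out of the argmax, as act() does above, is one common way to make a value-based policy respect operational constraints; the paper may handle its constraints differently.
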
Bibliographic Details
Main Authors: Zamzam, Ahmed S., Yang, Bo, Sidiropoulos, Nicholas D.
Format: Conference Proceeding
Language: English
Subjects: data-driven control; deep neural networks; energy management systems; energy storage; reinforcement learning
Online Access: Request full text
DOI: 10.1109/PESGM40551.2019.8973808
Publisher: IEEE
EISBN: 9781728119816; 1728119812
EISSN: 1944-9933
Published in: 2019 IEEE Power & Energy Society General Meeting (PESGM), 2019, p. 1-5
Source: IEEE Xplore All Conference Series