Data-Driven Online Energy Scheduling of a Microgrid Based on Deep Reinforcement Learning
The proliferation of distributed renewable energy resources (RESs) poses major challenges to the operation of microgrids due to uncertainty. Traditional online scheduling approaches that rely on accurate forecasts become difficult to implement as the share of uncertain RESs grows. Although several data-driven methods have been proposed recently to overcome this challenge, they generally suffer from scalability issues because of their limited ability to optimize high-dimensional continuous control variables. To address these issues, we propose a data-driven online scheduling method for microgrid energy optimization based on continuous-control deep reinforcement learning (DRL). We formulate the online scheduling problem as a Markov decision process (MDP) whose objective is to minimize the operating cost of the microgrid under uncertainty in RES generation, load demand, and electricity prices. To learn the optimal scheduling strategy, a Gated Recurrent Unit (GRU)-based network is designed to extract temporal features of the uncertainty and generate scheduling decisions in an end-to-end manner. To optimize the policy over high-dimensional continuous actions, proximal policy optimization (PPO) is employed to train the neural-network-based policy in a data-driven fashion. The proposed method requires neither forecasts of the uncertain quantities nor prior knowledge of the microgrid's physical model. Simulation results using realistic power system data from the California Independent System Operator (CAISO) demonstrate the effectiveness of the proposed method.
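The abstract outlines the core components: an MDP formulation, a GRU network that extracts temporal features from the observed uncertainty, and PPO for training a policy over high-dimensional continuous actions. As a rough orientation only, the minimal PyTorch sketch below shows what a GRU-based Gaussian policy with a PPO clipped surrogate loss can look like; the class names, layer sizes, and clipping constant are illustrative assumptions and do not reproduce the authors' actual architecture or hyperparameters.

```python
# Illustrative sketch only (not the paper's code): a GRU-based stochastic policy
# for continuous scheduling actions, plus the PPO clipped surrogate loss.
import torch
import torch.nn as nn


class GRUPolicy(nn.Module):
    """Maps a history of observations (e.g., RES output, load, price) to a
    Gaussian distribution over continuous actions and a state-value estimate."""

    def __init__(self, obs_dim: int, action_dim: int, hidden_dim: int = 64):
        super().__init__()
        # GRU extracts temporal features from the observation history.
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.mu_head = nn.Linear(hidden_dim, action_dim)   # mean action
        self.value_head = nn.Linear(hidden_dim, 1)         # critic
        # State-independent log standard deviation, a common PPO parameterization.
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, obs_seq: torch.Tensor):
        # obs_seq: (batch, time, obs_dim); use the final hidden state as the feature.
        _, h_n = self.gru(obs_seq)
        feat = h_n[-1]                                      # (batch, hidden_dim)
        mu = torch.tanh(self.mu_head(feat))                 # bounded mean in [-1, 1]
        std = self.log_std.exp().expand_as(mu)
        dist = torch.distributions.Normal(mu, std)
        value = self.value_head(feat).squeeze(-1)
        return dist, value


def ppo_clip_loss(dist, actions, old_log_probs, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective for one minibatch (advantages precomputed,
    e.g., with generalized advantage estimation)."""
    log_probs = dist.log_prob(actions).sum(-1)
    ratio = torch.exp(log_probs - old_log_probs)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()


# Example usage: a random 24-step history of 6 observed features, 3 set-points.
policy = GRUPolicy(obs_dim=6, action_dim=3)
dist, value = policy(torch.randn(1, 24, 6))
action = dist.sample()
```

In the paper's setting, the observation history would carry RES output, load, and electricity-price signals, and the sampled action would map to the microgrid's dispatch set-points; the full PPO training loop (rollout collection, advantage estimation, minibatch updates) is omitted here.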
Published in: Energies (Basel), 2021-04, Vol. 14 (8), p. 2120
Main Authors: Ji, Ying; Wang, Jianhui; Xu, Jiacan; Li, Donglin
Format: Article
Language: English
Subjects: data driven modeling; microgrid energy management; proximal policy optimization; recurrent neural network
Publisher: MDPI AG
DOI: 10.3390/en14082120
ISSN: 1996-1073
EISSN: 1996-1073
Source: DOAJ Directory of Open Access Journals; Publicly Available Content Database (ProQuest)