
Solving dynamic distribution network reconfiguration using deep reinforcement learning

Distribution network reconfiguration, as a part of the distribution management system, plays an important role in increasing the energy efficiency of the distribution network by coordinating the operations of the switches in the distribution network. Dynamic distribution network reconfiguration (DDNR), enabled by a sufficient number of remote switching devices in the distribution network, attempts to find the optimal topologies of the distribution network over a specified time interval. This paper proposes data-driven DDNR based on deep reinforcement learning (DRL). The DRL-based DDNR controller aims to minimize the objective function, i.e., active energy losses and the cost of switching manipulations, while satisfying the constraints. The following constraints are considered: allowed bus voltages, allowed line apparent powers, a radial network configuration with all buses being supplied, and the maximal allowed number of switching operations. This optimization problem is modelled as a Markov decision process by defining the possible states and actions of the DDNR agent (controller) and rewards that lead the agent to minimize the objective function while satisfying the constraints. Switching operation constraints are modelled by modifying the action space definition instead of including an additional penalty term in the reward function, to increase the computational efficiency. The proposed algorithm was tested on three test examples: a small benchmark network, a real-life large-scale test system, and the IEEE 33-bus radial system, and the results confirmed the robustness and scalability of the proposed algorithm.
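The abstract's key implementation idea is to enforce the switching-operation limit by shrinking the agent's action space rather than adding a reward penalty. The sketch below illustrates that general action-masking technique; it is not the authors' code, and all names and numbers (`q_values`, `ops_per_action`, `max_ops`) are hypothetical.

```python
import math

def mask_infeasible_actions(q_values, ops_per_action, ops_used, max_ops):
    """Mask actions whose switching cost would exceed the remaining budget.

    q_values       -- the agent's value estimate for each action
    ops_per_action -- switching operations each action would consume
    ops_used       -- operations already performed in this interval
    max_ops        -- maximal allowed number of switching operations
    """
    masked = list(q_values)
    for action, cost in enumerate(ops_per_action):
        if ops_used + cost > max_ops:
            masked[action] = -math.inf  # the agent can never select this action
    return masked

def greedy_action(q_values, ops_per_action, ops_used, max_ops):
    """Pick the highest-valued action among the feasible ones only."""
    masked = mask_infeasible_actions(q_values, ops_per_action, ops_used, max_ops)
    return max(range(len(masked)), key=lambda a: masked[a])

# With 3 of 4 allowed operations already used, the action costing 2 more
# operations is excluded before the greedy choice is made.
best = greedy_action([1.2, 3.4, 0.5], [1, 2, 0], ops_used=3, max_ops=4)
```

Because infeasible actions are removed before selection, the agent never has to learn (via penalties) that they are forbidden, which is the computational-efficiency argument made in the abstract.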

Saved in:
Bibliographic Details
Published in:Electrical engineering 2022, Vol.104 (3), p.1487-1501
Main Authors: Kundačina, Ognjen B., Vidović, Predrag M., Petković, Milan R.
Format: Article
Language:English
Subjects: Algorithms; Constraint modelling; Controllers; Deep learning; Distribution management; Economics and Management; Electrical Engineering; Electrical Machines and Networks; Energy distribution; Energy Policy; Engineering; Machine learning; Markov processes; Optimization; Power Electronics; Reconfiguration; Switches; Switching; Topology
Citations: Items that this one cites
Items that cite this one
Online Access:Get full text
DOI: 10.1007/s00202-021-01399-y
ISSN: 0948-7921
EISSN: 1432-0487
Source: Springer Nature