
A Deep Reinforcement Learning Design for Virtual Synchronous Generators Accommodating Modular Multilevel Converters

Bibliographic Details
Published in: Applied Sciences, 2023-05, Vol. 13 (10), p. 5879
Main Authors: Yang, Mu; Wu, Xiaojie; Loveth, Maxwell Chiemeka
Format: Article
Language: English
Summary: The deep reinforcement learning (DRL) technique has gained attention for its potential in designing “virtual network” controllers. DRL offers a solution that avoids the need for the specific parameters and explicit system model required by classical dynamic programming algorithms. However, handling system uncertainties and the resulting performance deterioration remains a challenge. To overcome it, the authors propose a new control prototype built around a twin delayed deep deterministic policy gradient (TD3)-based adaptive controller, which replaces the conventional virtual synchronous generator (VSG) module in the modular multilevel converter (MMC) control. In this approach, an adaptive programming module is developed from a critic fuzzy network perspective to determine the optimal control policy. The proposed framework improves system stability and rejects disturbances while retaining the merits of the conventional VSG control model. The approach is implemented and tested using the DRL toolbox in MATLAB/Simulink.
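
The paper's controller is built on TD3 and realized in MATLAB/Simulink, which is not reproduced here. As a language-neutral illustration only, the following Python/PyTorch sketch shows the core TD3 ingredient the summary refers to: twin target critics with target-policy smoothing, whose minimum forms the critic's learning target. All names, dimensions, and network sizes (STATE_DIM, ACTION_DIM, mlp, noise settings) are assumptions made for illustration and are not taken from the paper.

# Illustrative TD3 target computation (not the authors' implementation).
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, ACTION_MAX = 4, 1, 1.0   # assumed: small VSG error state, one control output

def mlp(in_dim, out_dim):
    # Small fully connected network used for both actor and critics in this sketch.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, out_dim))

actor_target = mlp(STATE_DIM, ACTION_DIM)            # target policy network
critic1_target = mlp(STATE_DIM + ACTION_DIM, 1)      # twin target critics
critic2_target = mlp(STATE_DIM + ACTION_DIM, 1)

def td3_target(next_state, reward, done, gamma=0.99, noise_std=0.2, noise_clip=0.5):
    """Clipped double-Q target used in the TD3 critic update."""
    with torch.no_grad():
        # Target-policy smoothing: perturb the target action with clipped noise.
        noise = (torch.randn(next_state.shape[0], ACTION_DIM) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (torch.tanh(actor_target(next_state)) * ACTION_MAX + noise).clamp(-ACTION_MAX, ACTION_MAX)
        # Clipped double-Q: take the elementwise minimum of the two target critics.
        q_in = torch.cat([next_state, next_action], dim=1)
        q_next = torch.min(critic1_target(q_in), critic2_target(q_in))
        return reward + gamma * (1.0 - done) * q_next

# Example usage with a dummy minibatch of 8 transitions.
target_q = td3_target(torch.randn(8, STATE_DIM), torch.randn(8, 1), torch.zeros(8, 1))
print(target_q.shape)  # torch.Size([8, 1])

Taking the minimum of two critics and smoothing the target action are the mechanisms TD3 uses to damp value overestimation; this is broadly the property that motivates its use for the robustness and disturbance-rejection goals described in the summary, though the paper's actual training setup, reward design, and MMC/VSG environment are defined in its MATLAB/Simulink implementation.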
ISSN: 2076-3417
DOI: 10.3390/app13105879