Reinforcement Learning-Based Net Load Volatility Control of Active Distribution Power Networks
Format: Conference Proceeding
Language: English
Summary: To achieve real-time, effective control of net load variability in active power networks, this paper proposes an online training and control framework based on reinforcement learning (RL). First, the training data are sampled from probability distributions to give the trained agent better generalization ability. Second, a closed-loop agent training workflow is designed, comprising dynamic initialization, distributed training, testing, and deployment. The RL agent takes the 4-hour-ahead load forecast as input, enabling real-time decision-making that accounts for dynamic trends. Moreover, a hierarchical online decision and action-delivery framework is adopted, in which actions are split by weight from the global level down to the factory and device levels. This structure improves the robustness of online operation and lets the agent adapt readily to changing conditions. The proposed method was deployed in two distribution grids in Zhejiang Province, China. It operated uninterruptedly in both regions for 30 days and reduced the daily peak-to-valley difference by 6.82% and 19.23%, respectively, verifying the effectiveness of the proposed method.
ISSN: 1944-9933
DOI: 10.1109/PESGM51994.2024.10689010
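The hierarchical action delivery described in the summary, where a global net-load adjustment is split by weight down to factory and then device level, can be sketched as follows. This is a minimal illustration assuming proportional allocation; the paper's actual splitting weights, units, and interfaces are not given in this record, so the function name and the example weights below are hypothetical.

```python
def weighted_split(total: float, weights: list[float]) -> list[float]:
    """Split a total adjustment proportionally to the given weights.

    The returned shares always sum back to `total`, so the global
    action is conserved across each level of the hierarchy.
    """
    s = sum(weights)
    return [total * w / s for w in weights]


# Hypothetical global action from the RL agent: shed 10 MW of net load.
global_action_mw = 10.0

# Factory-level split (weights assumed proportional to flexible capacity).
factory_actions = weighted_split(global_action_mw, [5.0, 3.0, 2.0])
# → [5.0, 3.0, 2.0]

# Device-level split within the first factory.
device_actions = weighted_split(factory_actions[0], [2.0, 2.0, 1.0])
# → [2.0, 2.0, 1.0]
```

Because each split conserves its input total, the sum of all device-level actions equals the global action, which is what makes the top-down delivery consistent.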