
Development of a Soft Actor Critic deep reinforcement learning approach for harnessing energy flexibility in a Large Office building

Bibliographic Details
Published in: Energy and AI, 2021-09, Vol. 5, p. 100101, Article 100101
Main Authors: Kathirgamanathan, Anjukan, Mangina, Eleni, Finn, Donal P.
Format: Article
Language:English
Summary: This research is concerned with the novel application and investigation of 'Soft Actor Critic' based deep reinforcement learning to control the cooling setpoint (and hence cooling loads) of a large commercial building to harness energy flexibility. The research is motivated by the challenge associated with developing and applying conventional model-based control approaches at scale to the wider building stock. Soft Actor Critic is a model-free deep reinforcement learning technique that can handle continuous action spaces and which has seen limited application to real-life or high-fidelity simulation implementations in the context of automated and intelligent control of building energy systems. Such control techniques are seen as one possible solution to supporting the operation of a smart, sustainable and future electrical grid. This research tests the suitability of the technique through training and deployment of the agent on an EnergyPlus based environment of the office building. The agent was found to learn an optimal control policy that reduced energy costs by 9.7% compared to the default rule-based control scheme while maintaining or improving thermal comfort over a test period of one week. The algorithm was shown to be robust to different hyperparameter choices, and this optimal control policy was learnt using a minimal state space consisting of readily available variables. The robustness of the algorithm was further tested by investigating the speed of learning and the ability to deploy to different seasons and climates. The agent was found to require minimal training sample points, outperforming the baseline after three months of operation without disrupting thermal comfort during this period. The agent is transferable to other climates and seasons, although further retraining or hyperparameter tuning is recommended.

Highlights:
• A novel application of Soft Actor Critic deep reinforcement learning.
• Controller harnesses energy flexibility from building passive thermal mass.
• A novel investigation into robustness of hyperparameters and state space design.
• Agent able to achieve cost savings of 9.7% compared to baseline control.
• Minimal training is required without the need for disruptive excitation.
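
The summary above describes the control setup at a high level: a continuous cooling-setpoint action, a minimal state of readily available variables, and an objective that trades energy cost against thermal comfort. The following is a minimal sketch of such a setup using the off-the-shelf SAC implementation from stable-baselines3 and a toy stand-in environment. The environment class CoolingSetpointEnv, its dynamics, tariff, reward weights and setpoint bounds are illustrative assumptions for a self-contained example; they are not the EnergyPlus model or the hyperparameters used in the paper.

# Hypothetical sketch: Soft Actor Critic controlling a cooling setpoint.
# The paper trains the agent against an EnergyPlus model of a large office;
# here a toy stand-in environment with made-up thermal dynamics is used so the
# example runs on its own. All names, bounds and weights are assumptions.

import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class CoolingSetpointEnv(gym.Env):
    """Toy single-zone environment: the action is a continuous cooling
    setpoint, and the reward trades off energy cost against a
    thermal-comfort penalty (assumed form, not from the paper)."""

    def __init__(self):
        super().__init__()
        # Continuous action: cooling setpoint in degrees C (assumed bounds).
        self.action_space = spaces.Box(low=22.0, high=27.0, shape=(1,), dtype=np.float32)
        # Minimal state: [zone temperature, outdoor temperature, electricity price, hour of day]
        self.observation_space = spaces.Box(
            low=np.array([10.0, -10.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([40.0, 45.0, 1.0, 23.0], dtype=np.float32),
        )
        self.t = 0
        self.zone_temp = 24.0

    def _obs(self):
        hour = float(self.t % 24)
        outdoor = 25.0 + 8.0 * np.sin(2 * np.pi * hour / 24.0)
        price = 0.8 if 12 <= hour <= 18 else 0.2  # crude peak/off-peak tariff
        return np.array([self.zone_temp, outdoor, price, hour], dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.zone_temp = 24.0
        return self._obs(), {}

    def step(self, action):
        setpoint = float(np.clip(action[0], 22.0, 27.0))
        obs = self._obs()
        outdoor, price = float(obs[1]), float(obs[2])
        # Made-up first-order dynamics: zone drifts toward the setpoint,
        # cooling power grows with the outdoor-setpoint gap.
        cooling_power = max(outdoor - setpoint, 0.0) * 0.5
        self.zone_temp += 0.3 * (setpoint - self.zone_temp) + 0.05 * (outdoor - self.zone_temp)
        energy_cost = price * cooling_power
        comfort_penalty = max(abs(self.zone_temp - 24.0) - 2.0, 0.0)  # soft comfort band
        reward = -(energy_cost + 10.0 * comfort_penalty)
        self.t += 1
        truncated = self.t >= 24 * 7  # one simulated week per episode
        return self._obs(), reward, False, truncated, {}


if __name__ == "__main__":
    env = CoolingSetpointEnv()
    # Off-the-shelf SAC; default hyperparameters, not those tuned in the paper.
    model = SAC("MlpPolicy", env, verbose=0, learning_rate=3e-4)
    model.learn(total_timesteps=20_000)

In the paper the stand-in dynamics above are replaced by a high-fidelity EnergyPlus simulation of the large office building, and training proceeds over roughly three months of simulated operation before the learnt policy outperforms the rule-based baseline.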
ISSN: 2666-5468
DOI: 10.1016/j.egyai.2021.100101