
Model-predictive control and reinforcement learning in multi-energy system case studies.

Authors :
Ceusters, Glenn
Rodríguez, Román Cantú
García, Alberte Bouso
Franke, Rüdiger
Deconinck, Geert
Helsen, Lieve
Nowé, Ann
Messagie, Maarten
Camargo, Luis Ramirez
Source :
Applied Energy. Dec 2021, Vol. 303.
Publication Year :
2021

Abstract

Model predictive control (MPC) is an optimal control technique that keeps the total operation cost of multi-energy systems at a minimum while fulfilling all system constraints. However, this method presumes an adequate model of the underlying system dynamics, which is prone to modelling errors and is not necessarily adaptive, incurring an initial and ongoing project-specific engineering cost. In this paper, we present an on- and off-policy multi-objective reinforcement learning (RL) approach that does not assume a model a priori, and benchmark it against a linear MPC (LMPC, chosen to reflect current practice, although non-linear MPC performs better). Both are derived from the general optimal control problem, highlighting their differences and similarities. In a simple multi-energy system (MES) configuration case study, we show that a twin delayed deep deterministic policy gradient (TD3) RL agent can match and even outperform the perfect-foresight LMPC benchmark (101.5%), whereas the realistic LMPC, i.e. with imperfect predictions, achieves only 98%. In a more complex MES configuration, the RL agent's performance is generally lower (94.6%), yet still better than the realistic LMPC (88.9%). In both case studies, the RL agents outperformed the realistic LMPC after a training period of two years using quarter-hourly interactions with the environment. We conclude that reinforcement learning is a viable optimal control technique for multi-energy systems, given adequate constraint handling and pre-training to avoid unsafe interactions and long training periods, as proposed in future work.

• An integrated control strategy enables the efficient use of energy at lower costs.
• Model-predictive control and reinforcement learning share common mathematical ground.
• Reinforcement learning-based energy management does not require a priori models.
• Reinforcement learning can outperform model-predictive control after training.
• Safety and fast convergence remain challenges in reinforcement learning. [ABSTRACT FROM AUTHOR]
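At each control step, a linear MPC of the kind benchmarked in the abstract reduces to a linear program over the prediction horizon. The sketch below illustrates that idea on a hypothetical single-battery dispatch problem; all prices, demands, and capacities are invented for illustration and are not the paper's MES models.

```python
import numpy as np
from scipy.optimize import linprog

# Toy horizon dispatch for a grid + battery system (illustrative numbers only).
T = 4                                         # horizon steps
price = np.array([0.10, 0.30, 0.05, 0.25])    # assumed grid prices [EUR/kWh]
demand = np.array([1.0, 1.5, 0.5, 1.0])       # assumed load [kWh per step]
cap, soc0 = 3.0, 1.0                          # battery capacity, initial state of charge

# Decision variables per step: grid import g_t >= 0 and battery power b_t
# (b_t > 0 discharges, b_t < 0 charges). Energy balance: g_t + b_t = demand_t.
# Objective: minimise total grid cost sum_t price_t * g_t.
c = np.concatenate([price, np.zeros(T)])      # cost applies to g only
A_eq = np.hstack([np.eye(T), np.eye(T)])      # g_t + b_t = demand_t
b_eq = demand

# State of charge: soc_t = soc0 - cumsum(b)[t]; enforce 0 <= soc_t <= cap
# via linear inequalities on the cumulative discharge.
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([
    np.hstack([np.zeros((T, T)), L]),         # cumsum(b) <= soc0      (soc >= 0)
    np.hstack([np.zeros((T, T)), -L]),        # -cumsum(b) <= cap-soc0 (soc <= cap)
])
b_ub = np.concatenate([np.full(T, soc0), np.full(T, cap - soc0)])
bounds = [(0, None)] * T + [(-cap, cap)] * T

res = linprog(c, A_ub=A_ub, A_eq=A_eq, b_ub=b_ub, b_eq=b_eq, bounds=bounds)
g, b = res.x[:T], res.x[T:]                   # optimal imports and battery schedule
```

In a receding-horizon scheme, only the first step of the resulting plan would be applied before the problem is re-solved with updated forecasts; the "realistic" versus "perfect foresight" distinction in the abstract corresponds to whether `price` and `demand` over the horizon are predicted or known exactly.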

Details

Language :
English
ISSN :
0306-2619
Volume :
303
Database :
Academic Search Index
Journal :
Applied Energy
Publication Type :
Academic Journal
Accession number :
152649261
Full Text :
https://doi.org/10.1016/j.apenergy.2021.117634