
Learning and Fast Adaptation for Grid Emergency Control via Deep Meta Reinforcement Learning.

Authors :
Huang, Renke
Chen, Yujiao
Yin, Tianzhixi
Huang, Qiuhua
Tan, Jie
Yu, Wenhao
Li, Xinya
Li, Ang
Du, Yan
Source :
IEEE Transactions on Power Systems. Nov2022, Vol. 37 Issue 6, p4168-4178. 11p.
Publication Year :
2022

Abstract

As power systems undergo a significant transformation with more uncertainty, less inertia, and operation closer to their limits, the risk of large outages is increasing. There is therefore an imperative need to enhance grid emergency control to maintain system reliability and security. Toward this end, great progress has been made in recent years in developing deep reinforcement learning (DRL) based grid control solutions. However, existing DRL-based solutions have two main limitations: 1) they cannot handle a wide range of grid operating conditions, system parameters, and contingencies well; 2) they generally lack the ability to adapt quickly to new grid operating conditions, system parameters, and contingencies, limiting their applicability for real-world applications. In this paper, we mitigate these limitations by developing a novel deep meta-reinforcement learning (DMRL) algorithm. The DMRL algorithm combines meta strategy optimization with DRL and trains policies modulated by a latent space that can quickly adapt to new scenarios. We test the developed DMRL algorithm on the IEEE 300-bus system. We demonstrate fast adaptation of the meta-trained DRL policies with latent variables to new operating conditions and scenarios, achieving superior performance compared to state-of-the-art DRL and model predictive control (MPC) methods. [ABSTRACT FROM AUTHOR]
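The abstract's core idea, a policy modulated by a latent variable whose value (rather than the policy weights) is searched when a new scenario appears, can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration only: the environment, the policy form, and the random-search adaptation are stand-ins and are not the authors' actual DMRL method, network, or grid model.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_action(obs, z, W):
    # Latent-conditioned policy: the action depends on both the
    # observation and the latent vector z (hypothetical linear form).
    return np.tanh(W @ np.concatenate([obs, z]))

def episode_return(env_param, z, W, steps=20):
    # Toy stand-in for a grid scenario: reward is higher when the
    # action tracks a scenario-specific target derived from env_param.
    obs = np.ones(3)
    total = 0.0
    for _ in range(steps):
        a = policy_action(obs, z, W)
        target = np.tanh(env_param)
        total += -np.sum((a - target) ** 2)
        obs = 0.9 * obs  # simple deterministic dynamics
    return total

def adapt_latent(env_param, W, z_dim=2, iters=200):
    # "Fast adaptation": search only over the latent z on the new
    # scenario, with the (meta-trained) policy weights W held fixed.
    best_z = np.zeros(z_dim)
    best_r = episode_return(env_param, best_z, W)
    for _ in range(iters):
        z = rng.normal(size=z_dim)
        r = episode_return(env_param, z, W)
        if r > best_r:
            best_z, best_r = z, r
    return best_z, best_r

W = rng.normal(scale=0.5, size=(2, 5))    # stand-in for meta-trained weights
base = episode_return(1.5, np.zeros(2), W)  # unadapted latent
z_star, adapted = adapt_latent(1.5, W)      # latent tuned to the new scenario
```

The design point this sketch captures is that adaptation is cheap: only a low-dimensional latent is optimized per scenario, so no gradient updates to the policy weights are needed at deployment time.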

Details

Language :
English
ISSN :
0885-8950
Volume :
37
Issue :
6
Database :
Academic Search Index
Journal :
IEEE Transactions on Power Systems
Publication Type :
Academic Journal
Accession number :
160691944
Full Text :
https://doi.org/10.1109/TPWRS.2022.3155117