
Fill-and-Spill: Deep Reinforcement Learning Policy Gradient Methods for Reservoir Operation Decision and Control.

Authors :
Tabas, Sadegh Sadeghi
Samadi, Vidya
Source :
Journal of Water Resources Planning & Management; Jul 2024, Vol. 150 Issue 7, p1-18, 18p
Publication Year :
2024

Abstract

Changes in demand, various hydrological inputs, and environmental stressors are among the issues that reservoir managers and policymakers face on a regular basis. These concerns have sparked interest in applying different techniques to determine reservoir operation policy decisions. As the resolution of the analysis increases, it becomes more difficult to effectively represent a real-world system using traditional methods such as dynamic programming and stochastic dynamic programming for determining the best reservoir operation policy. One of the challenges is the "curse of dimensionality," meaning that the number of samples needed to estimate an arbitrary function with a given level of accuracy grows exponentially with the number of input variables (i.e., the dimensionality) of the function. Deep reinforcement learning (DRL) is an intelligent approach to overcoming these curses in stochastic optimization problems for reservoir operation policy decisions. To our knowledge, this study is the first attempt to examine several novel DRL continuous-action policy gradient methods, including the deep deterministic policy gradient (DDPG), twin delayed DDPG (TD3), and two versions of the soft actor-critic (SAC18 and SAC19), for optimizing reservoir operation policy. In this study, multiple DRL techniques were implemented to find an optimal operation policy for Folsom Reservoir in California, which serves the City of Sacramento and supplies agricultural, municipal, hydropower, and environmental flow demands as well as flood control operations. Analysis suggests that TD3 and SAC are robust in meeting the Folsom Reservoir's demands and optimizing reservoir operation policies. Experiments on continuous action spaces of reservoir policy decisions demonstrated that the DRL techniques can efficiently learn strategic policies in such spaces and can overcome the curses of dimensionality and modeling. [ABSTRACT FROM AUTHOR]
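To make the continuous-action framing concrete, the following is a minimal, hypothetical sketch of a single-reservoir decision problem of the kind the abstract describes: the state is current storage, the action is a continuous release fraction, and the reward penalizes both unmet demand and spill. The environment dynamics, parameter values, and the simple random-search policy improvement below are illustrative stand-ins only, not the Folsom model or the DDPG/TD3/SAC algorithms studied in the paper (those train neural actor-critic networks instead of a scalar policy parameter).

```python
import numpy as np

class ToyReservoir:
    """Hypothetical single-reservoir environment (illustrative dynamics,
    not the paper's Folsom Reservoir model)."""
    def __init__(self, capacity=100.0, demand=6.0, seed=0):
        self.capacity, self.demand = capacity, demand
        self.rng = np.random.default_rng(seed)
        self.storage = capacity / 2.0

    def step(self, frac):
        frac = float(np.clip(frac, 0.0, 1.0))       # continuous action in [0, 1]
        release = frac * self.storage
        inflow = self.rng.uniform(0.0, 12.0)        # stochastic hydrological input
        new = self.storage - release + inflow
        spill = max(0.0, new - self.capacity)       # overflow once the reservoir fills
        self.storage = min(new, self.capacity)
        return -abs(release - self.demand) - spill  # penalize deficit/excess and spill

def evaluate(frac, steps=200, seed=0):
    """Average reward of a constant release-fraction policy on a fixed inflow sequence."""
    env = ToyReservoir(seed=seed)
    return sum(env.step(frac) for _ in range(steps)) / steps

# Simple random-search stand-in for the actor update; DDPG/TD3/SAC instead
# improve a neural actor using gradients supplied by a learned critic.
rng = np.random.default_rng(1)
theta, best = 0.5, evaluate(0.5)
for _ in range(50):
    cand = float(np.clip(theta + rng.normal(0.0, 0.1), 0.0, 1.0))
    score = evaluate(cand)
    if score > best:                                # keep the perturbation if it helps
        theta, best = cand, score
print(f"learned release fraction {theta:.2f}, avg reward {best:.2f}")
```

Because the action is a real number rather than a discrete release level, this setup never enumerates a discretized state-action grid, which is the sense in which continuous-action DRL sidesteps the dimensionality blow-up of tabular dynamic programming.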

Details

Language :
English
ISSN :
07339496
Volume :
150
Issue :
7
Database :
Complementary Index
Journal :
Journal of Water Resources Planning & Management
Publication Type :
Academic Journal
Accession number :
177251898
Full Text :
https://doi.org/10.1061/JWRMD5.WRENG-6089