Risk Directed Importance Sampling in Stochastic Dual Dynamic Programming with Hidden Markov Models for Grid Level Energy Storage
- Publication Year :
- 2020
Abstract
- Power systems that integrate renewables at a large scale must account for the high levels of uncertainty these power sources introduce. One way to do so is with a system of many distributed grid-level storage devices. However, developing a cost-effective and robust control policy in this setting is challenging because of the high dimensionality of the resource state and the highly volatile stochastic processes involved. We first model the problem using a carefully calibrated power grid model and a specialized hidden Markov stochastic model for wind power that replicates crossing times. We then base our control policy on a variant of stochastic dual dynamic programming (SDDP), an algorithm well suited to certain high-dimensional control problems, modified to accommodate hidden Markov uncertainty in the stochastics. The algorithm can be impractical to use, however, because it exhibits relatively slow convergence. To accelerate it, we apply both quadratic regularization and a risk-directed importance sampling technique for sampling the outcome space at each time step of the backward pass. We show that the resulting policies are more robust than those developed using classical SDDP modeling assumptions and algorithms.
- Comment :
- 42 pages, 7 figures; replacement adds additional references
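To illustrate the risk-directed importance sampling idea described in the abstract, the sketch below shows one way such a scheme could look in a backward-pass sampler: outcomes are drawn from a proposal distribution tilted toward the high-cost tail, and likelihood ratios are returned so that reweighted estimates stay unbiased. This is a minimal, hypothetical sketch under simplified assumptions (equally likely outcomes, a quantile-based tail rule, and the `risk_directed_sample` name), not the authors' implementation.

```python
import numpy as np

def risk_directed_sample(costs, n_samples, alpha=0.9, mix=0.5, rng=None):
    """Draw outcome indices with probabilities tilted toward the
    high-cost (risky) tail, returning importance weights that keep
    reweighted averages unbiased.

    costs     : estimated stage cost of each outcome under the current policy
    n_samples : number of outcomes to draw for the backward pass
    alpha     : tail level; outcomes above this cost quantile get extra mass
    mix       : mixture weight between uniform and tail-focused sampling
    """
    rng = rng or np.random.default_rng()
    n = len(costs)
    uniform = np.full(n, 1.0 / n)  # nominal distribution (assumed uniform here)

    # Concentrate extra sampling mass on the upper cost tail, the region
    # that drives risk-averse (e.g. CVaR-type) objectives.
    threshold = np.quantile(costs, alpha)
    tail = (costs >= threshold).astype(float)
    tail /= tail.sum()

    # Mixing with the uniform distribution keeps every outcome reachable,
    # so the likelihood ratios below are always finite.
    proposal = mix * uniform + (1.0 - mix) * tail
    idx = rng.choice(n, size=n_samples, p=proposal)

    # Likelihood ratios p(i)/q(i) restore unbiasedness after tilting.
    weights = uniform[idx] / proposal[idx]
    return idx, weights

# Toy usage: 1000 equally likely outcomes with heavy-tailed stage costs.
rng = np.random.default_rng(0)
costs = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
idx, w = risk_directed_sample(costs, n_samples=50, rng=rng)
print(np.mean(w * costs[idx]), costs.mean())  # estimates should be close
```

In an SDDP backward pass, the sampled indices would select which outcomes are solved to generate cuts, with the weights applied when averaging dual information; the paper's actual sampling rule and risk measure may differ from this quantile-based stand-in.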
- Subjects :
- Mathematics - Optimization and Control
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2001.06026
- Document Type :
- Working Paper