Optimal parallelization strategies for active flow control in deep reinforcement learning-based computational fluid dynamics.
- Source :
- Physics of Fluids. Apr 2024, Vol. 36, Issue 4, p1-15. 15p.
- Publication Year :
- 2024
Abstract
- Deep reinforcement learning (DRL) has emerged as a promising approach for handling highly dynamic and nonlinear active flow control (AFC) problems. However, the computational cost associated with training DRL models presents a significant performance bottleneck. To address this challenge and enable efficient scaling on high-performance computing architectures, this study focuses on optimizing DRL-based algorithms in parallel settings. We validate an existing state-of-the-art DRL framework used for AFC problems and discuss its efficiency bottlenecks. Subsequently, by deconstructing the overall framework and conducting extensive scalability benchmarks for individual components, we investigate various hybrid parallelization configurations and propose efficient parallelization strategies. Moreover, we refine input/output (I/O) operations in multi-environment DRL training to tackle critical overhead associated with data movement. Finally, we demonstrate the optimized framework for a typical AFC problem where near-linear scaling can be obtained for the overall framework. We achieve a significant boost in parallel efficiency from around 49% to approximately 78%, and the training process is accelerated by approximately 47 times using 60 central processing unit (CPU) cores. These findings are expected to provide valuable insight for further advancements in DRL-based AFC studies.
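The headline figures in the abstract are mutually consistent under the standard definition of parallel efficiency (speedup divided by worker count): a 47× acceleration on 60 CPU cores gives roughly 78%. A minimal sketch of that check, using the textbook formula rather than anything specific to the authors' framework (the function name is ours):

```python
# Sanity-check the scaling figures quoted in the abstract.
# Standard definitions: speedup S = T_serial / T_parallel;
# parallel efficiency E = S / P for P workers (E = 1 is ideal linear scaling).

def parallel_efficiency(speedup: float, cores: int) -> float:
    """Fraction of ideal linear scaling achieved on `cores` workers."""
    return speedup / cores

eff = parallel_efficiency(speedup=47.0, cores=60)
print(f"{eff:.1%}")  # prints 78.3%, consistent with the reported ~78%
```

The same formula explains the "near-linear scaling" claim: efficiency stays close to 1 as cores are added only if speedup grows almost proportionally with the core count.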
Details
- Language :
- English
- ISSN :
- 1070-6631
- Volume :
- 36
- Issue :
- 4
- Database :
- Academic Search Index
- Journal :
- Physics of Fluids
- Publication Type :
- Academic Journal
- Accession number :
- 177184702
- Full Text :
- https://doi.org/10.1063/5.0204237