
Deep Reinforcement Learning for Flow Control Exploits Different Physics for Increasing Reynolds Number Regimes.

Authors :
Varela, Pau
Suárez, Pol
Alcántara-Ávila, Francisco
Miró, Arnau
Rabault, Jean
Font, Bernat
García-Cuevas, Luis Miguel
Lehmkuhl, Oriol
Vinuesa, Ricardo
Source :
Actuators; Dec2022, Vol. 11 Issue 12, p359, 24p
Publication Year :
2022

Abstract

The increase in emissions associated with aviation requires deeper research into novel sensing and flow-control strategies to obtain improved aerodynamic performance. In this context, data-driven methods are suitable for exploring new approaches to control the flow and to develop more efficient strategies. Deep artificial neural networks (ANNs) used together with reinforcement learning, i.e., deep reinforcement learning (DRL), are receiving increasing attention due to their capabilities for controlling complex problems in multiple areas. In particular, these techniques have recently been used to solve problems related to flow control. In this work, an ANN trained through a DRL agent, coupled with the numerical solver Alya, is used to perform active flow control. The Tensorforce library was used to apply DRL to the simulated flow. Two-dimensional simulations of the flow around a cylinder were conducted, and an active control based on two jets located on the walls of the cylinder was considered. By gathering information from the flow surrounding the cylinder, the ANN agent is able to learn, through proximal policy optimization (PPO), effective control strategies for the jets, leading to a significant drag reduction. Furthermore, the agent needs to account for the coupled effects of the friction- and pressure-drag components, as well as the interaction between the two boundary layers on both sides of the cylinder and the wake. In the present work, a Reynolds-number range beyond those previously considered was studied and compared with results obtained using classical flow-control methods. Significantly different control strategies were identified by the DRL agent as the Reynolds number Re increased. On the one hand, for Re ≤ 1000, the classical control strategy based on opposition control relative to the wake oscillation was obtained.
On the other hand, for Re = 2000, the new strategy consisted of energization of the boundary layers and the separation area, which modulated the flow separation and reduced the drag, through a high-frequency actuation, in a fashion similar to that of the drag crisis. A cross-application of agents was performed for a flow at Re = 2000, obtaining similar results in terms of drag reduction with the agents trained at Re = 1000 and Re = 2000. The fact that two different strategies yielded the same performance raises the question of whether this Reynolds-number regime (Re = 2000) belongs to a transition towards a fundamentally different flow, which would only admit a high-frequency actuation strategy to obtain the drag reduction. At the same time, this finding allows for the application of ANNs trained at lower, but comparable, Reynolds numbers, saving computational resources. [ABSTRACT FROM AUTHOR]
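The workflow the abstract describes — flow probes feeding an ANN agent that actuates two jets so as to minimize drag — follows the standard DRL agent-environment interaction loop. The sketch below is purely illustrative: `ToyCylinderEnv` is a hypothetical stand-in for the Alya CFD solver (its "dynamics" are a toy formula, not a flow simulation), no PPO update is shown, and none of the names come from the paper or from Tensorforce.

```python
import random


class ToyCylinderEnv:
    """Hypothetical stand-in for the CFD environment: the state is a
    vector of probe readings around the cylinder, the action sets the
    mass-flow rate of the jets, and the reward penalizes drag.
    The dynamics are illustrative only, not a flow solver."""

    def __init__(self, n_probes=8, seed=0):
        self.rng = random.Random(seed)
        self.n_probes = n_probes
        self.drag = 1.0  # normalized drag coefficient (toy value)

    def reset(self):
        self.drag = 1.0
        return [self.rng.uniform(-1.0, 1.0) for _ in range(self.n_probes)]

    def step(self, jet_rate):
        # Toy rule: jet rates near 0.3 reduce drag fastest (floor at 0.5).
        self.drag = max(0.5, self.drag - 0.01 * (1.0 - abs(jet_rate - 0.3)))
        reward = -self.drag  # maximizing reward == minimizing drag
        state = [self.rng.uniform(-1.0, 1.0) for _ in range(self.n_probes)]
        return state, reward


def run_episode(env, policy, n_steps=50):
    """Generic interaction loop used by any DRL algorithm, PPO included:
    observe the probes, actuate the jets, accumulate the reward."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(n_steps):
        action = policy(state)
        state, reward = env.step(action)
        total_reward += reward
    return total_reward


baseline = run_episode(ToyCylinderEnv(), policy=lambda s: 0.0)    # jets off
controlled = run_episode(ToyCylinderEnv(), policy=lambda s: 0.3)  # good rate
```

In the paper this loop is driven by a PPO agent that learns the policy from the probe readings; here the two fixed policies merely show that the reward signal distinguishes an effective actuation from no actuation.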

Details

Language :
English
ISSN :
2076-0825
Volume :
11
Issue :
12
Database :
Complementary Index
Journal :
Actuators
Publication Type :
Academic Journal
Accession number :
160942384
Full Text :
https://doi.org/10.3390/act11120359