
A practically implementable reinforcement learning‐based process controller design.

Authors :
Hassanpour, Hesam
Wang, Xiaonian
Corbett, Brandon
Mhaskar, Prashant
Source :
AIChE Journal; Jan2024, Vol. 70 Issue 1, p1-15, 15p
Publication Year :
2024

Abstract

The present article enables reinforcement learning (RL)‐based controllers for process control applications. Existing RL‐based solutions face significant challenges for online implementation, since training an RL agent (controller) presently requires a practically impossible number of online interactions between the agent and the environment (process). To address this challenge, we propose an implementable model‐free RL method that leverages industrially implemented model predictive control (MPC) calculations (often designed using a simple linear model identified via step tests). In the first step, MPC calculations are used to pretrain an RL agent that can mimic the MPC performance. Specifically, the MPC calculations are used to pretrain the actor, and the objective function is used to pretrain the critic(s). The pretrained RL agent is then employed within a model‐free RL framework to control the process in a way that initially imitates MPC behavior (thus not compromising process performance and safety), but also continuously learns and improves its performance over the nominal linear MPC. The effectiveness of the proposed approach is illustrated through simulations on a chemical reactor example. [ABSTRACT FROM AUTHOR]
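The pretraining step described in the abstract amounts to behavior cloning: fitting the actor to reproduce logged MPC state–action pairs before any model-free learning begins. The following is a minimal sketch of that idea only, not the paper's implementation; it assumes, for illustration, a stand-in linear state-feedback law in place of full MPC calculations, and a linear actor fit by least squares.

```python
import numpy as np

# Hypothetical sketch of the first step: pretrain an actor to mimic a
# controller. A linear state-feedback gain K_mpc stands in for the MPC
# calculations (an assumption for this sketch; the paper pretrains on
# actual MPC solutions and uses the MPC objective for the critic).

rng = np.random.default_rng(0)
K_mpc = np.array([[1.5, -0.4]])        # stand-in "MPC" feedback gain

# Log (state, action) pairs from the stand-in controller
X = rng.normal(size=(200, 2))          # sampled process states
U = X @ K_mpc.T                        # corresponding controller actions

# Behavior cloning: least-squares fit of a linear actor to the logged data
K_actor, *_ = np.linalg.lstsq(X, U, rcond=None)

# The pretrained actor reproduces the controller's policy on new states,
# giving model-free fine-tuning a safe, MPC-like starting point.
x_new = np.array([0.3, -0.7])
print(np.allclose(x_new @ K_actor, x_new @ K_mpc.T))  # True
```

In the paper's setting the cloned policy is only the initialization; the agent then continues to learn online and can outperform the nominal linear MPC it was cloned from.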

Details

Language :
English
ISSN :
00011541
Volume :
70
Issue :
1
Database :
Complementary Index
Journal :
AIChE Journal
Publication Type :
Academic Journal
Accession number :
174325790
Full Text :
https://doi.org/10.1002/aic.18245