
On Separation Between Learning and Control in Partially Observed Markov Decision Processes

Authors :
Malikopoulos, Andreas A.
Publication Year :
2022

Abstract

Cyber-physical systems (CPS) encounter a large volume of data that is added to the system gradually in real time rather than all at once in advance. As the volume of data increases, the domain of the control strategies also grows, and it becomes challenging to search for an optimal strategy. Even if an optimal control strategy is found, implementing strategies with such expanding domains is burdensome. To derive an optimal control strategy for a CPS, we typically assume an ideal model of the system. Such model-based control approaches cannot effectively yield optimal solutions with performance guarantees due to the discrepancy between the model and the actual CPS. Alternatively, traditional supervised learning approaches cannot always provide robust solutions using data derived offline. Similarly, applying reinforcement learning approaches directly to the actual CPS can have significant implications for the safety and robust operation of the system. The goal of this chapter is to provide a theoretical framework that separates the control and learning tasks, allowing us to combine offline model-based control with online learning and thus circumvent the challenges in deriving optimal control strategies for CPS.

Comment: 18 pages, 5 figures. arXiv admin note: text overlap with arXiv:2101.10992
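The combination of offline model-based control with online learning described in the abstract can be pictured with a minimal, hypothetical sketch: a controller is designed offline on a nominal model, while an online estimator learns the discrepancy between that nominal model and the observed system. The LQR design, the recursive-least-squares learner, and all variable names below are illustrative assumptions only and are not taken from the paper.

```python
# Hypothetical sketch: offline controller on a nominal model + online learning of the
# model discrepancy. Not the paper's framework; an illustration of the general idea only.
import numpy as np

# --- Offline phase: design a controller on a nominal linear model x' = A x + B u ---
A_nom = np.array([[1.0, 0.1], [0.0, 1.0]])
B_nom = np.array([[0.0], [0.1]])

def lqr_gain(A, B, Q, R, iters=500):
    """Solve the discrete-time Riccati equation by fixed-point iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = lqr_gain(A_nom, B_nom, Q=np.eye(2), R=np.array([[1.0]]))

# --- Online phase: learn the discrepancy d(x, u) with recursive least squares ---
theta = np.zeros((3, 2))        # coefficients of a linear discrepancy model
P_rls = 100.0 * np.eye(3)       # RLS covariance

def features(x, u):
    return np.concatenate([x, u])   # phi = [x; u]

x = np.array([1.0, 0.0])
for t in range(200):
    u = -K @ x                                   # control law computed offline
    # Placeholder for the real (unknown) system: a perturbed nominal model plus noise.
    x_next = (A_nom + 0.05) @ x + B_nom.flatten() * u + 0.01 * np.random.randn(2)
    # Online learning: fit the residual between the observation and the nominal prediction.
    phi = features(x, u)
    resid = x_next - (A_nom @ x + B_nom.flatten() * u)
    gain = P_rls @ phi / (1.0 + phi @ P_rls @ phi)
    theta += np.outer(gain, resid - phi @ theta)
    P_rls -= np.outer(gain, phi @ P_rls)
    x = x_next
```

In this sketch the control task (computing K) is completed offline against the nominal model, while the learning task (estimating theta) runs online against data from the actual system, which is the kind of separation the abstract alludes to.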

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2211.14972
Document Type :
Working Paper