
RTA-IR: A runtime assurance framework for behavior planning based on imitation learning and responsibility-sensitive safety model.

Authors :
Peng, Yanfei
Tan, Guozhen
Si, Huaiwei
Source :
Expert Systems with Applications, Dec 2023, Vol. 232.
Publication Year :
2023

Abstract

Current research on artificial intelligence (AI) algorithms in safety-critical areas remains extremely challenging because these algorithms cannot be fully verified at design time. In this paper, we propose the RTA-IR architecture, which bypasses formal verification of the AI algorithm by incorporating runtime assurance (RTA), providing safety assurances for the AI controllers of complex autonomous vehicles (such as those obtained using neural networks) without excessive performance sacrifice. RTA-IR consists of a high-performance but unverified advanced controller, two verifiable safety controllers, and a decision module designed based on the Responsibility-Sensitive Safety (RSS) model. The advanced controller is built on attention-based generative adversarial imitation learning (GAIL), which imitates the efficient policies of experts from a set of expert demonstrations. RSS provides verifiable safety criteria and the switching logic for the decision module: RTA-IR keeps the vehicle safe when the advanced controller produces unsafe control, and restores control to the advanced controller once safety is confirmed. We tested and evaluated RTA-IR separately at two levels of traffic density in one driving task. Experiments show that RTA-IR exhibits superior performance in terms of both safety and efficiency compared to the baseline method. [ABSTRACT FROM AUTHOR]
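The switching idea the abstract describes can be sketched in code. The following is a minimal illustration, not the paper's actual implementation: it uses the standard RSS longitudinal safe-distance formula as the decision module's criterion, and hands control to a verified fallback whenever the learned policy would violate it. All function names and parameter values here are assumptions for illustration.

```python
# Hypothetical sketch of an RTA-style decision module using the RSS
# longitudinal safe-distance criterion. Names and default parameters
# are illustrative assumptions, not taken from the RTA-IR paper.

def rss_safe_longitudinal_distance(v_rear, v_front,
                                   rho=1.0, a_max=3.0,
                                   b_min=4.0, b_max=8.0):
    """Minimum safe following distance per the RSS model.

    v_rear, v_front: speeds of the rear/front vehicles (m/s)
    rho: rear vehicle's response time (s)
    a_max: rear vehicle's max acceleration during rho (m/s^2)
    b_min: rear vehicle's minimum (comfortable) braking (m/s^2)
    b_max: front vehicle's maximum braking (m/s^2)
    """
    d = (v_rear * rho
         + 0.5 * a_max * rho ** 2
         + (v_rear + rho * a_max) ** 2 / (2.0 * b_min)
         - v_front ** 2 / (2.0 * b_max))
    return max(d, 0.0)  # a negative result means any gap is safe


def select_controller(gap, v_rear, v_front):
    """Decision module: keep the learned (advanced) controller in the
    loop while the RSS criterion holds; otherwise switch to the
    verified safety controller."""
    if gap >= rss_safe_longitudinal_distance(v_rear, v_front):
        return "advanced"  # learned GAIL policy retains control
    return "safety"        # verified fallback takes over
```

The same check, evaluated once the situation is safe again, implements the abstract's other direction: control returns to the advanced controller as soon as the criterion is satisfied.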

Details

Language :
English
ISSN :
09574174
Volume :
232
Database :
Academic Search Index
Journal :
Expert Systems with Applications
Publication Type :
Academic Journal
Accession number :
170044677
Full Text :
https://doi.org/10.1016/j.eswa.2023.120824