
How Safe Am I Given What I See? Calibrated Prediction of Safety Chances for Image-Controlled Autonomy

Authors:
Mao, Zhenjiang
Sobolewski, Carson
Ruchkin, Ivan
Publication Year:
2023

Abstract

End-to-end learning has emerged as a major paradigm for developing autonomous systems. Unfortunately, with its performance and convenience comes an even greater challenge of safety assurance. A key factor of this challenge is the absence of the notion of a low-dimensional and interpretable dynamical state, around which traditional assurance methods revolve. Focusing on the online safety prediction problem, this paper proposes a configurable family of learning pipelines based on generative world models, which do not require low-dimensional states. To implement these pipelines, we overcome the challenges of learning safety-informed latent representations and missing safety labels under prediction-induced distribution shift. These pipelines come with statistical calibration guarantees on their safety chance predictions based on conformal prediction. We perform an extensive evaluation of the proposed learning pipelines on two case studies of image-controlled systems: a racing car and a cartpole.

Comment: This is supplementary material to the paper: How Safe Am I Given What I See? Calibrated Prediction of Safety Chances for Image-Controlled Autonomy, in the 6th Annual Learning for Dynamics & Control Conference (L4DC 2024)
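The calibration guarantee mentioned in the abstract rests on conformal prediction. As a rough illustration of the general idea (not the paper's actual pipeline), the following sketch applies split conformal prediction to a hypothetical safety-chance predictor: absolute errors on a held-out calibration set yield a threshold that gives finite-sample coverage for intervals around new predictions. All data and the nonconformity score here are assumptions for illustration only.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha=0.1):
    """Finite-sample-adjusted (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, q, method="higher")

# Hypothetical data: true safety chances and a noisy model's predictions
rng = np.random.default_rng(0)
true_p = rng.uniform(0.5, 1.0, size=500)
pred_p = np.clip(true_p + rng.normal(0, 0.05, size=500), 0.0, 1.0)

# Nonconformity score: absolute error on a calibration split
cal_scores = np.abs(pred_p[:400] - true_p[:400])
qhat = conformal_quantile(cal_scores, alpha=0.1)

# Calibrated interval for a new prediction: [pred - qhat, pred + qhat];
# empirical coverage on the test split should be near 1 - alpha
covered = np.abs(pred_p[400:] - true_p[400:]) <= qhat
print(round(covered.mean(), 2))
```

The key property is that coverage holds regardless of the predictor's quality, provided calibration and test points are exchangeable; a worse predictor simply produces wider intervals.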

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2308.12252
Document Type:
Working Paper