
Safety cases for frontier AI

Authors:
Buhl, Marie Davidsen
Sett, Gaurav
Koessler, Leonie
Schuett, Jonas
Anderljung, Markus
Publication Year:
2024

Abstract

As frontier artificial intelligence (AI) systems become more capable, it becomes more important that developers can explain why their systems are sufficiently safe. One way to do so is via safety cases: reports that make a structured argument, supported by evidence, that a system is safe enough in a given operational context. Safety cases are already common in other safety-critical industries such as aviation and nuclear power. In this paper, we explain why they may also be a useful tool in frontier AI governance, both in industry self-regulation and government regulation. We then discuss the practicalities of safety cases, outlining how to produce a frontier AI safety case and discussing what still needs to happen before safety cases can substantially inform decisions.

Comment: 25 pages, 6 figures, 5 tables

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.21572
Document Type:
Working Paper