Reluplex: a calculus for reasoning about deep neural networks.

Authors :
Katz, Guy
Barrett, Clark
Dill, David L.
Julian, Kyle
Kochenderfer, Mykel J.
Source :
Formal Methods in System Design; Feb 2022, Vol. 60, Issue 1, p87-116, 30p
Publication Year :
2022

Abstract

Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks that could be verified previously.
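For readers unfamiliar with the verification problem the abstract describes, the sketch below illustrates why ReLU constraints break ordinary linear programming and how a solver can still decide properties of a ReLU network. It is not the Reluplex algorithm itself: rather than handling ReLU constraints lazily inside simplex as Reluplex does, it naively enumerates every active/inactive phase assignment of the ReLUs and solves one linear program per assignment using SciPy's LP solver. The toy network weights, the property checked, and the helper find_counterexample are illustrative assumptions, not taken from the paper.

    # Naive ReLU-network verification by exhaustive phase enumeration.
    # NOT the Reluplex algorithm; a brute-force baseline for illustration.
    # All weights and the property below are made-up toy values.
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: y = c_out . ReLU(W x + b), 2 inputs, 2 hidden ReLUs.
    W = np.array([[1.0, -1.0],
                  [0.5,  2.0]])
    b = np.array([0.0, -1.0])
    c_out = np.array([1.0, 1.0])

    # Property to check: "y >= 0.5 for all x in [-1, 1]^2".
    # We search for a counterexample x with y <= 0.5 - eps.
    def find_counterexample(threshold=0.5, eps=1e-6, lo=-1.0, hi=1.0):
        n_in, n_hid = 2, 2
        # LP variables: [x (2), z (2 pre-activations), a (2 post-activations)]
        n_vars = n_in + 2 * n_hid
        for phases in itertools.product([0, 1], repeat=n_hid):
            A_eq, b_eq, A_ub, b_ub = [], [], [], []
            # Affine layer: z_i - W_i . x = b_i
            for i in range(n_hid):
                row = np.zeros(n_vars)
                row[:n_in] = -W[i]
                row[n_in + i] = 1.0
                A_eq.append(row); b_eq.append(b[i])
            # ReLU phases: each neuron is fixed active or inactive,
            # which makes every constraint in this LP linear.
            for i, active in enumerate(phases):
                zi, ai = n_in + i, n_in + n_hid + i
                if active:  # a_i = z_i and z_i >= 0
                    row = np.zeros(n_vars); row[ai] = 1.0; row[zi] = -1.0
                    A_eq.append(row); b_eq.append(0.0)
                    row = np.zeros(n_vars); row[zi] = -1.0
                    A_ub.append(row); b_ub.append(0.0)
                else:       # a_i = 0 and z_i <= 0
                    row = np.zeros(n_vars); row[ai] = 1.0
                    A_eq.append(row); b_eq.append(0.0)
                    row = np.zeros(n_vars); row[zi] = 1.0
                    A_ub.append(row); b_ub.append(0.0)
            # Violation constraint: c_out . a <= threshold - eps
            row = np.zeros(n_vars); row[n_in + n_hid:] = c_out
            A_ub.append(row); b_ub.append(threshold - eps)
            bounds = [(lo, hi)] * n_in + [(None, None)] * (2 * n_hid)
            # Zero objective: we only care about feasibility.
            res = linprog(np.zeros(n_vars),
                          A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                          bounds=bounds, method="highs")
            if res.success:
                return res.x[:n_in]  # counterexample input found
        return None  # every phase LP infeasible: property holds

    x = find_counterexample()
    if x is not None:
        print("counterexample input:", x)
    else:
        print("property holds on the toy network")

Each feasible LP yields a concrete counterexample input; if all 2^n phase LPs are infeasible, the property holds. This exponential blow-up in the number of ReLUs is exactly the cost that Reluplex's lazy treatment of ReLU constraints within simplex is designed to avoid, which is how it scales to the ACAS Xu networks mentioned above.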

Details

Language :
English
ISSN :
0925-9856
Volume :
60
Issue :
1
Database :
Complementary Index
Journal :
Formal Methods in System Design
Publication Type :
Academic Journal
Accession Number :
161769433
Full Text :
https://doi.org/10.1007/s10703-021-00363-7