A0492
Title: How to trust a black-box: Formal verification of deep neural networks
Authors: Huan Zhang - CMU (United States) [presenting]
Abstract: Neural networks have become a crucial element in modern artificial intelligence. However, they are often black boxes that can behave unexpectedly and produce surprisingly wrong results. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify their trustworthiness, such as safety and robustness. We will first introduce the neural network verification problem and the challenges involved in guaranteeing neural network outputs under bounded input perturbations. Then, we will discuss bound propagation-based verification algorithms such as CROWN and beta-CROWN, which efficiently propagate linear inequalities through the network in a backward manner. Finally, we will highlight the state-of-the-art verification techniques used in our alpha-beta-CROWN verifier, a scalable, powerful and GPU-accelerated neural network verifier that won the 2nd International Verification of Neural Networks Competition (VNN-COMP21) with the highest total score.
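To make the bound propagation idea concrete, the sketch below shows a CROWN-style backward bound computation for a toy two-layer ReLU network in NumPy. This is an illustrative assumption on our part, not the authors' implementation: the function name crown_lower_bound, the two-layer architecture, and the specific relaxation heuristic are chosen only for exposition, whereas the actual alpha-beta-CROWN verifier handles general architectures, optimizes the relaxation parameters, and runs on GPUs.

```python
import numpy as np

def crown_lower_bound(W1, b1, W2, b2, x_l, x_u):
    """Lower-bound each output of f(x) = W2 @ relu(W1 @ x + b1) + b2
    for x in the box [x_l, x_u] (elementwise), CROWN-style.

    Minimal sketch: pre-activation bounds come from one step of interval
    arithmetic; each ReLU is replaced by linear upper/lower relaxations,
    which are then propagated backward through the network to the input.
    """
    # Pre-activation bounds for the hidden layer (exact for one affine map).
    W1_pos, W1_neg = np.maximum(W1, 0), np.minimum(W1, 0)
    l = W1_pos @ x_l + W1_neg @ x_u + b1
    u = W1_pos @ x_u + W1_neg @ x_l + b1

    # Per-neuron linear relaxation of ReLU on [l, u]:
    #   upper bound: d_u * z + c_u  (the chord, for unstable neurons)
    #   lower bound: d_l * z        (slope 0 or 1, a common CROWN heuristic)
    d_u = np.where(u <= 0, 0.0,
                   np.where(l >= 0, 1.0, u / np.maximum(u - l, 1e-12)))
    c_u = np.where((l < 0) & (u > 0), -d_u * l, 0.0)
    d_l = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, (u >= -l).astype(float)))

    lbs = np.empty(W2.shape[0])
    for k in range(W2.shape[0]):
        A = W2[k]                      # coefficients w.r.t. the hidden layer
        # For a lower bound, positive coefficients take the ReLU lower
        # relaxation and negative coefficients take the upper relaxation.
        D = np.where(A >= 0, d_l, d_u)
        const = np.sum(np.where(A >= 0, 0.0, A * c_u)) + b2[k]
        A = A * D                      # coefficients w.r.t. pre-activations
        const += A @ b1
        A = A @ W1                     # coefficients w.r.t. the input x
        # Concretize the linear lower bound over the input box.
        lbs[k] = np.maximum(A, 0) @ x_l + np.minimum(A, 0) @ x_u + const
    return lbs

# Example usage on a random network and a small input box.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)
x0 = rng.standard_normal(4)
print(crown_lower_bound(W1, b1, W2, b2, x0 - 0.1, x0 + 0.1))
```

The backward pass mirrors the description in the abstract: each output is first written as a linear function of the hidden layer, the ReLU is replaced by sound linear inequalities, and the resulting inequality is pushed back to the input, where it can be minimized over the perturbation box in closed form.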