A0384
Title: Provable guarantees for Bayesian neural networks
Authors: Matthew Wicker - Imperial College London (United Kingdom) [presenting]
Abstract: To realize the substantial potential of modern machine learning in deployment, models must be optimized for both performance and reliability. It is no secret that deterministic neural networks suffer from diverse, critical failure modes, making their deployment challenging and requiring practitioners to formally verify that their NNs are safe. Bayesian neural networks (BNNs), on the other hand, appear to have favourable trustworthiness properties, including heightened robustness and fairness, and offer considerable theoretical advantages besides. However, neither the practical nor the theoretical advantages can be realized without meeting the bar of formal certification required of deterministic neural networks. Studies that provide empirical evidence for the heightened trustworthiness of BNNs are first reviewed. The broad spectrum of properties of interest when verifying a given BNN is then covered, along with an intuitive methodology for computing formal guarantees for such properties. It is further shown how to incorporate these properties into the likelihood at inference time, so as to infer BNN posteriors with provably trustworthy predictions. Finally, the limitations of the presented methods are discussed, and a brief but systematic overview of significant open research questions in the area is provided.
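
To make the flavour of such guarantees concrete, below is a minimal sketch (not the speaker's implementation) of one common formulation: estimating, by Monte Carlo sampling from an assumed Gaussian weight posterior combined with sound interval bound propagation, the posterior probability that a toy two-layer BNN classifies every input in a small box as the target class. All weights, shapes, and parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer BNN: independent Gaussian posterior over weights
# (means and standard deviations are illustrative, not learned).
W1_mu, W1_sigma = rng.standard_normal((4, 2)), 0.1
b1_mu, b1_sigma = rng.standard_normal(4), 0.1
W2_mu, W2_sigma = rng.standard_normal((2, 4)), 0.1
b2_mu, b2_sigma = rng.standard_normal(2), 0.1

def interval_affine(x_lo, x_hi, W, b):
    # Propagate an input box through an affine layer exactly,
    # splitting W into its positive and negative parts.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lo = W_pos @ x_lo + W_neg @ x_hi + b
    hi = W_pos @ x_hi + W_neg @ x_lo + b
    return lo, hi

def robust_under_weights(x_lo, x_hi, W1, b1, W2, b2, target=0):
    # Soundly check, via interval bound propagation, that every input in
    # [x_lo, x_hi] is classified as `target` by this sampled deterministic net.
    lo, hi = interval_affine(x_lo, x_hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    others = [i for i in range(len(lo)) if i != target]
    return all(lo[target] > hi[i] for i in others)  # worst-case margin check

# Monte Carlo estimate of P_{w ~ posterior}[ f_w is robust on the input box ].
x = np.array([0.5, -0.2])
eps = 0.01
x_lo, x_hi = x - eps, x + eps
n_samples = 1000
hits = 0
for _ in range(n_samples):
    W1 = W1_mu + W1_sigma * rng.standard_normal(W1_mu.shape)
    b1 = b1_mu + b1_sigma * rng.standard_normal(b1_mu.shape)
    W2 = W2_mu + W2_sigma * rng.standard_normal(W2_mu.shape)
    b2 = b2_mu + b2_sigma * rng.standard_normal(b2_mu.shape)
    hits += robust_under_weights(x_lo, x_hi, W1, b1, W2, b2)
print(f"Estimated probabilistic robustness: {hits / n_samples:.3f}")

A sample-based estimate of this kind can be lifted to a formal probabilistic guarantee by pairing it with a standard concentration bound (e.g., Hoeffding's inequality) on the Monte Carlo error.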