A0325
Title: Detecting adversarial examples with Bayesian neural network
Authors: Yao Li - University of North Carolina at Chapel Hill (United States) [presenting]
Tongyi Tang - University of California, Davis (United States)
Thomas Lee - University of California, Davis (United States)
Cho-Jui Hsieh - University of California, Los Angeles (United States)
Abstract: A new framework is proposed for detecting adversarial examples, motivated by the observations that random components can improve the smoothness of predictors and make it easier to simulate the output distribution of a deep neural network. Based on these observations, we propose a novel Bayesian adversarial example detector, abbreviated BATer, to improve the performance of adversarial example detection. Specifically, we study the distributional difference in hidden-layer outputs between natural and adversarial examples, and propose to use the randomness of a Bayesian neural network (BNN) to simulate the hidden-layer output distribution and to leverage its dispersion to detect adversarial examples. The advantage of a BNN is that its output is stochastic, whereas deterministic neural networks lack this property. Empirical results on several benchmark datasets against popular attacks show that the proposed BATer outperforms state-of-the-art detectors in adversarial example detection.
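The sketch below illustrates the core idea described in the abstract and is not the authors' implementation: it uses Monte Carlo dropout as a simple stand-in for a full BNN, and the names (DropoutBNN, dispersion_score, detect) as well as the specific dispersion measure (mean per-unit standard deviation of the hidden-layer output across stochastic forward passes) are illustrative assumptions.

import torch
import torch.nn as nn

class DropoutBNN(nn.Module):
    """Small classifier whose dropout stays active at inference, so
    repeated forward passes approximate samples from a BNN posterior."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10, p=0.2):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p))
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h = self.hidden(x)  # stochastic hidden-layer output
        return self.head(h), h

def dispersion_score(model, x, n_samples=20):
    """Detection statistic: average per-unit standard deviation of the
    hidden-layer output over stochastic forward passes. Adversarial
    examples are expected to show higher dispersion than natural ones."""
    model.train()  # keep dropout active so each pass is a posterior sample
    with torch.no_grad():
        hs = torch.stack([model(x)[1] for _ in range(n_samples)])  # (n_samples, batch, hidden)
    return hs.std(dim=0).mean(dim=1)  # one score per input

def detect(model, x, threshold):
    """Flag inputs whose dispersion exceeds a threshold calibrated on
    natural data (e.g., a high quantile of scores on a clean validation set)."""
    return dispersion_score(model, x) > threshold

In this simplified version, the threshold would be chosen on clean held-out data to fix the false-positive rate; the paper's detector presumably builds a richer statistic from the hidden-layer output distributions than the single summary used here.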