CMStatistics 2022
B0752
Title: The difficulty of computing stable and accurate neural networks: On the barriers of deep learning \& Smale's 18th problem
Authors: Matthew Colbrook - University of Cambridge (United Kingdom) [presenting]
Vegard Antun - University of Oslo (Norway)
Anders Hansen - University of Cambridge (United Kingdom)
Abstract: Deep learning (DL) has had unprecedented success and is now rapidly entering scientific computing (SC). However, DL suffers from a universal phenomenon: instability, despite universal approximation results that often guarantee the existence of stable and accurate neural networks (NNs). We show the following paradox. There are well-conditioned problems in SC for which one can prove the existence of NNs with excellent approximation properties, yet no algorithm can train such an NN. For any positive integers $n>2$ and $M$, there are cases where simultaneously: (a) no algorithm can train an NN correct to $n$ digits, (b) there exists an algorithm that trains an NN with $n-1$ correct digits, but any such algorithm requires arbitrarily many training samples, (c) there exists an algorithm that trains an NN with $n-2$ correct digits using no more than $M$ training samples. These results provide basic foundations for Smale's 18th problem and imply a classification theory describing the conditions under which (stable) NNs with a given accuracy can be trained. We initiate this theory with a unified framework for compressed sensing and DL, leading to sufficient conditions for the existence of training algorithms for stable NNs in inverse problems. We introduce FIRENETs, which we prove, and numerically verify, are stable. FIRENETs require only $O(\log(1/\epsilon))$ hidden layers to compute an $\epsilon$-accurate solution to the inverse problem.
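The $O(\log(1/\epsilon))$ depth bound can be read as the standard depth count for an unrolled, linearly convergent iterative scheme. The display below is a minimal sketch under an assumption not stated in the abstract, namely that each unrolled block of layers contracts the reconstruction error towards the solution $x^\ast$ by a fixed factor $\rho \in (0,1)$.

% Illustrative sketch only; the contraction factor \rho is an assumed simplification, not a claim from the abstract.
\[
  \|x_L - x^\ast\| \;\le\; \rho^{L}\,\|x_0 - x^\ast\| \;\le\; \epsilon
  \quad\text{whenever}\quad
  L \;\ge\; \frac{\log\bigl(\|x_0 - x^\ast\|/\epsilon\bigr)}{\log(1/\rho)}
  \;=\; O\bigl(\log(1/\epsilon)\bigr),
\]

so an $\epsilon$-accurate reconstruction needs only logarithmically many such blocks, consistent with the stated layer count.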