CMStatistics 2021
B0935
Title: Dimension-robust neural network priors for Bayesian reinforcement learning
Authors: Torben Sell - University of Edinburgh (United Kingdom) [presenting]
Sumeetpal Singh - Cambridge University (United Kingdom)
Abstract: A new neural network-based prior is discussed for real-valued functions on $\mathbb{R}^d$ which, by construction, scales more easily and cheaply in the domain dimension $d$ than the usual Karhunen-Loeve function space prior, a property we refer to as ``domain dimension robustness''. The new prior is a Gaussian neural network prior, in which each weight and bias has an independent Gaussian prior, but with the key difference that the variances decrease with the width of the network in such a way that the resulting function is almost surely well defined in the limit of an infinite-width network. We show that, in a Bayesian treatment of inferring unknown functions, the induced posterior over functions is amenable to Monte Carlo sampling using Hilbert space Markov chain Monte Carlo (MCMC) methods. This type of MCMC is popular, e.g., in the Bayesian inverse problems literature, because it is stable under mesh refinement, i.e. the acceptance probability does not shrink to 0 as more parameters of the function's prior are introduced, even ad infinitum. We also implement examples in Bayesian reinforcement learning to automate tasks from data and demonstrate, for the first time, the stability of MCMC under mesh refinement for these types of problems.
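The two ingredients the abstract describes, a Gaussian network prior whose weight variances shrink with width, and a prior-preserving Hilbert space MCMC move, can be sketched as follows. This is an illustrative sketch only: the 1/width output-weight variance scaling and the preconditioned Crank-Nicolson (pCN) proposal are standard choices assumed here, not necessarily the exact construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior_network(width, d=1, rng=rng):
    """Draw one random single-hidden-layer network
    f(x) = sum_j v_j * tanh(w_j . x + b_j).
    The output-weight variance is scaled as 1/width, a standard choice that
    keeps f well defined as width -> infinity (the paper's scaling may differ)."""
    W = rng.normal(0.0, 1.0, size=(width, d))               # input weights
    b = rng.normal(0.0, 1.0, size=width)                    # biases
    v = rng.normal(0.0, 1.0 / np.sqrt(width), size=width)   # shrinking output weights

    def f(x):
        x = np.atleast_2d(x)                 # shape (n, d)
        return np.tanh(x @ W.T + b) @ v      # shape (n,)
    return f

def pcn_step(theta, log_lik, beta=0.2, rng=rng):
    """One pCN move on parameters with a standard Gaussian prior:
    theta' = sqrt(1 - beta^2) * theta + beta * xi, with xi drawn from the prior.
    Because the proposal preserves the Gaussian prior, the accept ratio involves
    only the likelihood, which is why acceptance does not collapse to 0 as the
    number of parameters (the 'mesh') is refined."""
    xi = rng.normal(size=theta.shape)
    proposal = np.sqrt(1.0 - beta**2) * theta + beta * xi
    if np.log(rng.uniform()) < log_lik(proposal) - log_lik(theta):
        return proposal, True
    return theta, False
```

With a flat likelihood the pCN chain simply samples the prior, and its acceptance probability is 1 regardless of how many weights are introduced; with a data likelihood, the same robustness to refinement is what the abstract refers to as stability under mesh refinement.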