A0439
Title: Zero-order parallel sampling
Authors: Francesco Pozza - Bocconi University (Italy) [presenting]
Giacomo Zanella - Bocconi University (Italy)
Abstract: Finding effective ways to exploit parallel computing to speed up MCMC convergence is an important problem in Bayesian computation and related disciplines. The zero-order (a.k.a. derivative-free) version of the problem is considered, where it is assumed that (a) the gradient of the (log-)target density is unavailable (for theoretical, practical, or computational reasons) and (b) the (expensive) target density can be evaluated in parallel at K different locations, with these evaluations used to speed up MCMC convergence. Two main contributions are made. First, it is shown that any method falling within a fairly general "multiple proposal framework" can only speed up convergence by log(K) factors in high dimensions. The fundamental limitation of such a framework, which includes multiple-try MCMC as well as many other previously proposed methods, is that it restricts possible moves to the support of the K evaluation points. The results are stated in terms of upper bounds on the spectral gap of the resulting schemes. Second, it is discussed how stochastic gradient estimators can be used to make better use of parallel computing and achieve polynomial speedups in K. The resulting methods bear similarities to, but also notable differences from, classical zero-order optimization methods.
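To make the "multiple proposal framework" concrete, a minimal sketch of one of its best-known instances, multiple-try Metropolis with a symmetric Gaussian proposal, is given below. This is a textbook version for illustration, not the authors' specific scheme; note how every move is chosen among the K parallel evaluation points, which is exactly the restriction the abstract's first result exploits.

```python
import numpy as np

def mtm_step(pi, x, K, prop_sd, rng):
    """One multiple-try Metropolis step (symmetric Gaussian proposal).

    The K candidate densities, and the K reference densities below,
    are independent evaluations of the (possibly expensive) target pi
    and could each be computed in parallel.
    """
    ys = x + prop_sd * rng.standard_normal(K)        # K candidates
    wy = pi(ys)                                      # candidate weights
    j = rng.choice(K, p=wy / wy.sum())               # pick one candidate
    y = ys[j]
    xs = y + prop_sd * rng.standard_normal(K - 1)    # reference points
    wx = pi(np.append(xs, x))                        # include current state
    # accept with the standard multiple-try Metropolis probability
    if rng.random() < min(1.0, wy.sum() / wx.sum()):
        return y
    return x
```

Running the step repeatedly on a one-dimensional standard normal target recovers the correct mean and variance; the chain only ever jumps to one of the K proposed points, never elsewhere.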
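The abstract does not specify which stochastic gradient estimators are used, but a standard choice from zero-order optimization is the two-point Gaussian-smoothing estimator, sketched below: K random directions give 2K target evaluations that can run in parallel, and their finite differences average into a gradient surrogate that can drive a Langevin-type move. The function names and the unadjusted update are hypothetical illustrations, not the authors' method.

```python
import numpy as np

def zo_grad(logpi, x, K, sigma, rng):
    """Two-point zero-order estimate of grad log pi at x.

    The 2K evaluations of logpi (at x + sigma*u and x - sigma*u for K
    random directions u) are independent, so they can be computed in
    parallel when each target evaluation is expensive.
    """
    U = rng.standard_normal((K, x.shape[0]))          # random directions
    fp = np.array([logpi(x + sigma * u) for u in U])
    fm = np.array([logpi(x - sigma * u) for u in U])
    # average finite-difference directional derivatives times directions
    return np.mean(((fp - fm) / (2.0 * sigma))[:, None] * U, axis=0)

def zo_langevin_step(logpi, x, K, sigma, step, rng):
    """Unadjusted Langevin-type move driven by the estimated gradient
    (illustration only; the abstract's actual schemes are not given)."""
    g = zo_grad(logpi, x, K, sigma, rng)
    return x + 0.5 * step * g + np.sqrt(step) * rng.standard_normal(x.shape)
```

Unlike the multiple-proposal schemes above, the resulting move is not confined to the K evaluation points: the evaluations only inform the drift, which is what leaves room for polynomial (rather than logarithmic) speedups in K.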