A1219
Title: Zero-order parallel sampling
Authors: Giacomo Zanella - Bocconi University (Italy) [presenting]
Francesco Pozza - Bocconi University (Italy)
Abstract: Finding effective ways to exploit parallel computing to speed up MCMC convergence is an important problem in Bayesian computation and related disciplines. The zero-order (also known as derivative-free) version of the problem is considered, in which it is assumed that (a) the gradient of the target distribution is unavailable (for theoretical, practical or computational reasons) and (b) the (expensive) target distribution can be evaluated in parallel at K different locations, and these evaluations can be used to speed up MCMC convergence. Two main contributions are provided. First, any method falling within a quite general multiple-proposal framework is shown to speed up convergence by at most log(K) factors in high dimensions. The fundamental limitation of this framework, which includes multiple-try MCMC as well as many other previously proposed methods, lies in restricting possible moves to the K evaluation points. The results are stated as upper bounds on the spectral gap of the resulting schemes. Second, two approaches are discussed (one based on stochastic gradient estimators, the other on factorized proposals) that make better use of parallel computing and achieve polynomial speed-ups in K. Some of the methods share similarities, but also notable differences, with classical zero-order optimization methods.
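To make the multiple-proposal framework concrete, the following is a minimal sketch of one standard instance, multiple-try Metropolis, where each stage evaluates the target at K locations (here vectorised, but in practice those evaluations are the parallelisable cost). The function names, proposal scale, and weight choice w = pi are illustrative assumptions, not the authors' specific scheme; note that the accepted move is always one of the K candidate points, which is exactly the restriction the abstract identifies.

```python
import numpy as np

def logsumexp(a):
    """Numerically stable log(sum(exp(a)))."""
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def mtm_step(x, log_pi, K, step, rng):
    """One multiple-try Metropolis step with symmetric Gaussian proposals
    and weights w(y) = pi(y). The two batches of K target evaluations
    are the embarrassingly parallel part."""
    d = x.shape[0]
    # Stage 1: draw K candidate points around the current state x.
    ys = x + step * rng.standard_normal((K, d))
    log_wy = np.array([log_pi(y) for y in ys])   # K parallel evaluations
    # Select one candidate with probability proportional to pi(y_j).
    p = np.exp(log_wy - log_wy.max())
    j = rng.choice(K, p=p / p.sum())
    y = ys[j]
    # Stage 2: K-1 reference points around y, plus the current state x.
    xs = np.vstack([y + step * rng.standard_normal((K - 1, d)), x[None, :]])
    log_wx = np.array([log_pi(z) for z in xs])   # K parallel evaluations
    # Generalised accept/reject: min(1, sum_i pi(y_i) / sum_i pi(x_i)).
    if np.log(rng.uniform()) < logsumexp(log_wy) - logsumexp(log_wx):
        return y
    return x
```

The chain moves only through the K proposed locations per step, so increasing K refines the choice among candidates rather than enlarging the reachable set, which is the mechanism behind the logarithmic (rather than polynomial) speed-up bound.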